Control and Realism: Best of Both Worlds in Layout-to-Image without Training
Accept (poster)
Summary: This work introduces WinWinLay, a novel training-free optimization strategy for layout-to-image generation using text-to-image diffusion models. The paper tackles two main drawbacks of previous approaches for layout-to-image generation: (1) that the generated objects are often not precisely placed within the given bounding boxes, and (2) that there is a clear trade-off between the controllability from the layout and the overall quality of the image. First, the authors provide a theoretical analysis of the backward guidance, and propose a non-local attention energy function for better adherence to the bounding boxes. Then, to mitigate the quality-accuracy trade-off, the authors introduce an adaptive update based on Langevin dynamics. The experimental results show that WinWinLay sets the state of the art in training-free layout-to-image generation.

Claims And Evidence:
1. The problem definition of the paper is reasonable. Training-free layout-to-image generation is a practically important application, and there have been limitations in layout adherence and image quality, and a trade-off between the two.
2. The authors provide a thorough analysis of the optimization behavior of the attention energy function and of why it becomes difficult for the generated object to cover the whole bounding box region. The proposed solution, the non-local attention prior, is an intuitive choice to mitigate this. However, to claim its novelty and clear effectiveness compared to previous approaches, further justification of the design of the prior distribution $\tau_u$ and comparisons with previous works are required. For further details, please refer to the sections below.
3. The authors identify the critical issue of the trade-off between generation (layout) control and image quality. Applying the Langevin dynamics-based update to alleviate this trade-off seems to be a clear and valid approach.
4.
While bringing in the idea of Nash-MTL seems novel and acts as a key technical detail of WinWinLay, it is hard to understand its effectiveness clearly. Perhaps an ablation study with and without this idea could help understand it better.

Methods And Evaluation Criteria:
1. Introducing the "non-local attention prior" to encourage the object to cover a larger region within the bounding box seems a reasonable choice. However, defining the prior distribution to be centered at the "centroid" of the bounding box needs better justification. It seems to be quite a hard constraint when we want an object to be generated within the box. For some objects, it is more intuitive to think that the center of mass is in the lower region of the box (e.g., a car) rather than at the center. Have the authors encountered any failure cases in which enforcing the center to be at the centroid of the box led to degradation of the objects?
2. While one of the key objectives of the "non-local attention prior" is to maximize the coverage of the bounding box, there is no discussion of or comparison to R&B [1], which also addresses a similar issue by introducing a boundary-aware loss using the Sobel operator. Since R&B is the state-of-the-art method on training-free layout-to-image generation, it is crucial to discuss which aspects of WinWinLay have enabled it to obtain results that R&B failed to achieve.
3. The paper lacks critical baselines for comparison. Please refer to the "Essential References" section for this.
4. For the user study to have validity, the authors should provide more details on its setup. For instance, how were the participants gathered (e.g., Amazon Mechanical Turk), and how many people participated? What was the question format, and how many images were presented in each question?
5. In addition to the user study, there are multiple metrics that can be used to measure image quality. FID, PickScore [2], and ImageReward [3] are widely used metrics for this.
A user study alone seems insufficient to claim a clear advantage of WinWinLay over the baselines.

[1] R&B: Region and Boundary Aware Zero-shot Grounded Text-to-image Generation, Xiao et al., ICLR 2024
[2] Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation, Kirstain et al., NeurIPS 2023
[3] ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation, Xu et al., NeurIPS 2023

Theoretical Claims: Each theoretical claim in the paper is supported by a proof, which appears to be valid.

Experimental Designs Or Analyses:
1. The paper doesn't present sufficient exploration of the previous state-of-the-art methods, and misses important baselines. Please see the "Essential References" section for this.
2. Since enhancing image quality is a key objective of WinWinLay, the paper is expected to show more concrete comparisons of image quality against the baselines. Other than the user study, there are multiple widely used metrics to measure image quality and diversity. I have discussed this in the above section.

Supplementary Material: The authors provide a detailed review of Nash-MTL, which helps in understanding the key contribution of this work: treating the dual optimization problem as a bargaining game.

Relation To Broader Scientific Literature: In the broader field of image generation, enhancing user control is important. Offering accurate layout-based controls while preserving image quality would extend the applicability of text-to-image models to practical use cases in which users require precise region control.

Essential References Not Discussed: The paper lacks critical references to previous works on the topic of "training-free layout-to-image generation", which should have also been considered as baselines for quantitative/qualitative comparison. Notably, Attention-Refocusing [1] and R&B [2] were previous works on the same topic, both published at a conference in 2024.
These works also aim to address the problem of limited adherence to the bounding box inputs, and the degraded image quality due to the introduction of the backward guidance. Therefore, without discussions of these prior works and comparisons between the outputs of WinWinLay and these baselines, it would be difficult to claim the novelty and effectiveness of the proposed approach.

[1] Grounded Text-to-Image Synthesis with Attention Refocusing, Phung et al., CVPR 2024
[2] R&B: Region and Boundary Aware Zero-shot Grounded Text-to-image Generation, Xiao et al., ICLR 2024

Other Strengths And Weaknesses: I have discussed the main points in the above sections.

Other Comments Or Suggestions: The example in Fig. 2 may not be the best choice for claiming that it is important for each object to cover the whole bounding box. Even if the rabbit does not fully fit the box, it is likely to satisfy the user's intentions. An example in which the situation or the story in the image changes when the object doesn't fit the box would better show the importance of maximum coverage.

Questions For Authors: Please refer to the above sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to sincerely thank Reviewer YGgi for the thorough and constructive feedback on our manuscript. We are more than happy that the reviewer finds our problem definition reasonable, our theoretical analysis illustrative, and our performance superior. We would like to address the concerns below.

***W1. Non-local attention prior.*** **A1.** We acknowledge that for certain object categories—such as coconut trees *(as discussed in lines 80-84)* and cars—the center of mass may deviate from the geometric center of the bounding box due to their asymmetric or elongated structures, making strict centralization suboptimal. To address this, we adopt a decaying schedule for the weight $\rho$ of the non-local attention prior *(lines 225–229)*, which applies stronger centralizing guidance during early denoising and gradually relaxes it as generation progresses. This design allows the model to first stabilize attention and then adapt to the object's natural shape and pose. *As shown in Fig.4*, our method successfully generates well-aligned objects like cars and astronauts without rigidly adhering to the box center. Moreover, our *ablation in Fig.6* shows that maintaining a high prior weight throughout can degrade results, confirming the effectiveness of our decay strategy. In summary, the non-local attention prior functions as a soft, adaptive constraint, enabling both precise spatial control and natural object generation across diverse categories.

***W2. Ablation of Nash-MTL.*** **A2.** The adaptive update is a key component of WinWinLay, designed to balance layout controllability and visual quality with minimal computational overhead. *As shown in Tab.2*, our ablation study demonstrates that incorporating this strategy consistently improves both spatial fidelity and semantic alignment. Moreover, *Fig.7* shows that it reduces the number of update steps by dynamically adjusting optimization directions, leading to faster convergence.
We further report actual inference times *in Tab.3*, confirming that the adaptive update significantly improves generation efficiency.

***W3. Example in Fig.2.*** **A3.** Adequate object coverage within the bounding box is crucial for precise and reliable user control. In our evaluation, metrics like AP depend on the IoU between generated objects and target boxes—if an object covers only part of the box, IoU drops, leading to lower AP. This not only affects quantitative results but also indicates a mismatch with user-defined spatial intent, particularly in scenes with dense object layouts requiring fine-grained control.

***W4. Compared with R&B.*** **A4.** To clarify the distinctions and advantages of WinWinLay, we provide a focused analysis of the two core components of R&B: Region-aware Loss (RAL) and Boundary-aware Loss (BAL).

(1) RAL leverages the IoU between the predicted box $\hat{\mathcal{B}}$ and the ground-truth box $\mathcal{B}$ to guide object placement. However, this mechanism introduces two critical limitations:
- The non-differentiability of bounding box operations makes gradient-based updates less reliable and harder to tune.
- Similar to traditional attention energy functions, RAL does not mitigate local bias, leading to incomplete or inaccurate coverage within the target region.

(2) BAL employs a Sobel filter to enhance attention contrast at object boundaries within the box, with the goal of improving object realism. However, it too adopts an energy function-based formulation, and as such suffers from similar drawbacks:
- Due to the non-differentiability of IoU, BAL reduces to a heuristic energy-based form, which continues to suffer from local bias and incomplete box filling.
- BAL does not account for the underlying latent distribution $z_{t}$, and therefore cannot effectively navigate the trade-off between layout adherence and image fidelity.
Together, these differences underscore the principled, efficient, and robust nature of WinWinLay in addressing the limitations that R&B is inherently constrained by. We have now included this discussion in the revised manuscript.

***W5. More baselines and metrics.*** **A5.** Please see A1 for Reviewer hxyx.

***W6. Details of user study.*** **A6.** To ensure clarity and reproducibility, we conducted the user study on Wenjuanxing, a platform similar to Amazon Mechanical Turk. 150 participants evaluated 50 image pairs, yielding 7,500 responses per study. For each pair, users answered two questions:
- Which image better matches the bounding box layout?
- Which image has higher visual quality?

Images were shown side-by-side with layout prompts, and both the question order and image positions were randomized to avoid bias. These details have been included in the revised manuscript for completeness.

Thanks again for the insightful review. We are happy to discuss any aspects of the manuscript that may require further clarification.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors for addressing the raised concerns, especially providing quantitative comparisons with R&B and clarifying the technical contributions of WinWinLay. I have updated my recommendation to "Weak Accept".

---

Reply to Comment 1.1.1: Comment: We are truly grateful for your thoughtful and constructive feedback, which has been instrumental in improving our work. Thank you again for your time and valuable input throughout the review process :)
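The coverage effect discussed in A3 above (an object filling only part of its target box directly lowers IoU, and hence AP) can be illustrated with a minimal sketch; the boxes here are hypothetical and use the standard `(x1, y1, x2, y2)` convention:

```python
def iou(box_a, box_b):
    # standard intersection-over-union for (x1, y1, x2, y2) boxes
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

target = (0, 0, 100, 100)
full = iou((0, 0, 100, 100), target)     # object fills the box -> IoU = 1.0
partial = iou((25, 25, 75, 75), target)  # object covers only the center -> IoU = 0.25
```

An object occupying only the central quarter of its box scores IoU 0.25, below the common 0.5 threshold used for AP, which is exactly why partial coverage depresses the reported detection metrics.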
Summary: This work aims to achieve high-quality generation for the Layout-to-Image generation task without requiring any training data. It begins by providing a theoretical analysis of existing backward guidance methods and introduces a novel Non-local Attention Energy Function, which enables the model to better respect spatial constraints while maintaining the natural structure of objects. Furthermore, the authors identify that the standard backpropagation update rule can lead to deviations from the pre-trained domain, resulting in visual artifacts. To address this issue, the work proposes a Langevin dynamics-based Adaptive Update scheme, effectively balancing layout adherence and visual realism. Extensive experiments demonstrate the method's superior performance in both controllability and realism.

Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes, I have reviewed the entire supplementary material.
Relation To Broader Scientific Literature: Layout-to-Image generation has emerged as a prominent research area. This work offers a theoretical analysis of existing backward guidance methods and proposes novel algorithms to address their limitations.
Essential References Not Discussed: No

Other Strengths And Weaknesses: Strengths:
- The paper is clearly structured and easy to follow, providing a smooth and coherent flow of ideas.
- The analysis presented in this work is both comprehensive and insightful. It provides a deep investigation into the limitations of existing methods and introduces effective solutions that significantly improve generation quality and controllability, including the Non-local Attention Energy Function and the Langevin dynamics-based Adaptive Update scheme. Together, these innovations lead to more controllable and higher-quality generation outcomes.
- Experimental results demonstrate that the proposed method achieves superior performance compared to state-of-the-art techniques, both qualitatively and quantitatively, highlighting its effectiveness.

Weakness:
- The proposed method is evaluated using Stable Diffusion 1.5, which is relatively outdated. Have the authors considered implementing their approach on more recent models, such as Stable Diffusion 3/3.5 or FLUX?

Other Comments Or Suggestions: No.
Questions For Authors: The examples shown in the visualizations involve relatively large bounding boxes, which may provide more contextual cues to the model. However, it remains unclear how the method performs with smaller bounding boxes corresponding to small objects (e.g., a mug on a table). Have the authors evaluated the model's generation quality and spatial controllability under such scenarios?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
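For reference, the Langevin-dynamics update that the summary's "Adaptive Update scheme" refers to has, in its generic textbook form (not necessarily the authors' exact adaptive variant), the step

$$z_{t} \;\leftarrow\; z_{t} \;-\; \eta\,\nabla_{z_{t}} E(z_{t}) \;+\; \sqrt{2\eta}\,\epsilon, \qquad \epsilon \sim \mathcal{N}(0, I),$$

where $E$ is the guidance energy over the latent $z_{t}$ and $\eta$ is the step size; the injected Gaussian noise is what distinguishes a Langevin step from plain gradient descent and helps keep updates within the learned distribution.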
Rebuttal 1: Rebuttal: We would like to sincerely thank Reviewer QbsF for the thorough and constructive feedback on our manuscript. We are more than happy that the reviewer finds our paper clearly structured, our analysis comprehensive and insightful, and our performance superior. We would like to address the concerns below.

***W1. Other base models.*** **A1.** In line with prior works, we adopt SD1.5 as the default base model in our experiments to ensure a fair and direct comparison with existing training-free Layout-to-Image methods. This choice is consistent with the evaluation protocols followed by representative baselines such as BoxDiff, Layout-Control, and CSG. To further demonstrate the generality and scalability of our approach, we also extend WinWinLay to the more recent and powerful SDXL model. As shown in the teaser figure and in additional visualizations provided in the Appendix, our method consistently delivers superior performance under the SDXL setting as well, achieving strong controllability and realism.

***W2. Small objects.*** **A2.** In fact, both the COCO2014 and Flickr30K datasets used in our experiments naturally contain a significant number of small object instances, such as mice, coffee cups, and similar items. Therefore, the quantitative results already reported in Tab.1 and Tab.2 implicitly reflect the model's effectiveness in handling small bounding boxes. To more directly address the reviewer's concern, we have conducted an additional focused evaluation specifically targeting small-object scenarios. In this experiment, we filtered the datasets to retain only those bounding boxes that occupy no more than 1/8 of the total image area, following a standard definition of small object regions. As expected, all methods exhibit some performance degradation under these more challenging conditions, due to the intrinsic difficulty of generating fine-grained details in small spatial regions.
However, our method consistently outperforms all baselines across multiple metrics in this setting as well, demonstrating its robustness, precise spatial control, and superior fine-detail generation capabilities. Specifically, we observe that when the bounding boxes of multiple objects exhibit unnatural relationships—for instance, when a cat and an avocado are assigned bounding boxes of the same size—most existing algorithms suffer from significant performance degradation. In contrast, our method consistently maintains strong performance under such conditions. Visual results are available at this [link](https://anonymous.4open.science/r/WinWinLay/Small%20objects/Small%20objects.png).

| Model | Multidiffusion | BoxDiff | Layout-Control | CSG | **Ours** |
| :---: | :---: | :---: | :---: | :---: | :---: |
| COCO2014 (AP$\uparrow$) | 12.2 | 8.1 | 6.9 | 15.7 | **18.2** |
| COCO2014 (CLIP-s$\uparrow$) | 0.267 | 0.281 | 0.279 | 0.274 | **0.296** |
| Flickr30K (AP$\uparrow$) | 10.7 | 13.3 | 13.6 | 14.1 | **16.4** |
| Flickr30K (CLIP-s$\uparrow$) | 0.262 | 0.272 | 0.277 | 0.270 | **0.289** |

Thanks again for the insightful review. We are happy to discuss any aspects of the manuscript that may require further clarification.

---

Rebuttal Comment 1.1: Comment: Thank the authors for addressing my concerns. I will keep my rating.

---

Reply to Comment 1.1.1: Comment: We are truly grateful for your thoughtful and constructive feedback, which has been instrumental in improving our work. Thank you again for your time and valuable input throughout the review process :)
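The small-object filtering criterion used in the evaluation above (retaining only boxes occupying no more than 1/8 of the total image area) can be sketched as follows; the box format, function name, and image size are illustrative assumptions:

```python
def is_small_box(box, img_w, img_h, max_frac=1 / 8):
    # box as (x1, y1, x2, y2); "small" = box area at most max_frac of the image area
    box_area = (box[2] - box[0]) * (box[3] - box[1])
    return box_area <= max_frac * img_w * img_h

boxes = [(0, 0, 64, 64), (0, 0, 300, 300)]
# for a 512x512 image, only the 64x64 box qualifies as small
small = [b for b in boxes if is_small_box(b, 512, 512)]
```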
Summary: Layout-to-image generation faces two significant challenges: a.) imprecise object localization and b.) the presence of unrealistic artefacts in the final output. To address these issues, the authors propose WinWinLay, a novel method that incorporates:
1. **Non-Local Attention Energy Function**: This function refines attention scores across regions within each layout box, improving localization accuracy.
2. **Adaptive Update Scheme**: A Langevin dynamics-based adaptive update scheme that generates more realistic outputs.

Claims And Evidence: Yes, the claims are well-supported by thorough evidence in the manuscript.
- Figures 2 and 3 clearly illustrate the shortcomings of existing methods.
- The ablation studies in Section 5.3 provide strong evidence justifying the key design choices.

Methods And Evaluation Criteria: Yes, the proposed method is well-justified and effectively addresses the issues of imprecise localization and unrealistic artefacts in a training-free approach for the Layout-to-Image generation task. The method is evaluated on the COCO2014 and Flickr30K datasets. YOLOv7 is used for object detection, with AP metrics assessing the accuracy of generated objects. Additionally, CLIP-s evaluates alignment with the conditioning text, and a user study further supports the findings.
Theoretical Claims: Yes, I checked the correctness of the theoretical claims presented in Theorem 4.1, Corollary 4.2 and other equations in the paper. All the proofs and equations are correct.
Experimental Designs Or Analyses: Yes, the experimental design for the ablation studies is sound and is analyzed well in the manuscript.
Supplementary Material: I have reviewed all of the content in the supplementary material.
Relation To Broader Scientific Literature: The key insight of this paper is that the widely used attention energy function introduces spatial distribution biases, preventing objects from aligning uniformly with layout boxes.
To address this, the authors propose a non-local attention prior that redistributes attention scores for better alignment with layout boxes. Hence, it is helpful for the broader scientific literature wherever attention scores need to be redistributed based on a spatial region.

Essential References Not Discussed: All the major works are cited, although the following work is not discussed:
1. Phung, Q., Ge, S., & Huang, J. B. (2024). Grounded text-to-image synthesis with attention refocusing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7932-7942).

This work uses an attention refocusing mechanism and spatial layouts for controlled generation.

Other Strengths And Weaknesses:
**Strengths**
- **[S1] Paper Writing**: The paper is well-structured, with clear figures that effectively explain the key ideas.
- **[S2] Theoretical Proofs**: The proposed strategies are strongly supported by rigorous theoretical proofs. All the proofs and equations appear to be correct.
- **[S3] Necessity of the Problem Statement**: Guided generation without fine-tuning in diffusion models is an important research area. The proposed method addresses significant limitations in existing work, such as imprecise localization.
- **[S4] Exhaustive Ablation Studies**: The manuscript thoroughly discusses all key design choices. It provides exhaustive analysis of non-local attention and adaptive updates (Figure 5, Table 2), the effect of $\rho$ (Figure 6), and hyperparameters for adaptive updates (Figure 7).

**Weaknesses**
- **[W1] Missing Baselines**: The proposed method does not include comparisons with Attention-Refocusing [R1], InstanceDiffusion [R2], and MIGC [R3]. The authors must compare their method with these approaches for a complete evaluation.
- **[W2] Generation of multiple objects**: Can the proposed method generate images for complex text and layouts, such as "Two apples on a plate" or "Three rabbits wearing sunglasses"?
If not, the authors should discuss this limitation in the manuscript.

[R1] Phung, Q., Ge, S., & Huang, J. B. (2024). Grounded text-to-image synthesis with attention refocusing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7932-7942).
[R2] Wang, X., Darrell, T., Rambhatla, S. S., Girdhar, R., & Misra, I. (2024). Instancediffusion: Instance-level control for image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6232-6242).
[R3] Zhou, D., Li, Y., Ma, F., Zhang, X., & Yang, Y. (2024). Migc: Multi-instance generation controller for text-to-image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6818-6828).

Other Comments Or Suggestions: **Typos**:
- L436: "of" is repeated twice in "Hyperparameter of of Adaptive Update".

Questions For Authors:
- What is the rationale for not comparing the proposed method with Attention-Refocusing [R1], InstanceDiffusion [R2], and MIGC [R3]? (Refer to W1 in Strengths and Weaknesses.)
- Can the proposed method generate multiple objects of the same category? (Refer to W2 in Strengths and Weaknesses.)

If these concerns are addressed, I am willing to increase my score.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to sincerely thank Reviewer hxyx for the thorough and constructive feedback on our manuscript. We are more than happy that the reviewer finds our paper well-structured, our theoretical analysis illustrative, and our ablation studies exhaustive. We would like to address the concerns below.

***W1. More baselines.*** **A1.** As our proposed WinWinLay framework operates in a training-free paradigm for Layout-to-Image generation, all methods selected for comparison in our initial experiments, e.g., BoxDiff and CSG, were likewise training-free approaches. In contrast, methods such as [1] and [2] are training-based, requiring supervised learning on paired layout-image data. These methods operate under fundamentally different assumptions and objectives, making a direct comparison less meaningful within the scope of our setting. Regarding [3], we acknowledge its relevance. Due to strict page limitations, we initially prioritized comparisons with widely recognized baselines and the most recent SOTA method, CSG (ECCV 2024). Here, to address the concern and to offer a more comprehensive evaluation, we have now included additional experiments comparing our method with [3] and [4], both of which follow training-free paradigms more aligned with our setting. The extended results demonstrate that WinWinLay consistently outperforms these methods across multiple evaluation metrics, highlighting its superior controllability and visual realism. Visual results are available at this [link](https://anonymous.4open.science/r/WinWinLay/Compared%20with%20more%20SOTAs/Compared%20with%20more%20SOTAs.png). These findings further substantiate the robustness and effectiveness of our proposed method.
| Model (On COCO2014) | AP$\uparrow$ | CLIP-s$\uparrow$ | FID$\downarrow$ | PickScore$\uparrow$ [5] | ImageReward$\uparrow$ [6] |
| :---: | :---: | :---: | :---: | :---: | :---: |
| Layout-Control | 14.19 | 0.288 | 29.83 | 20.39 | 0.7016 |
| AttRe [3] | 15.26 | 0.277 | 27.72 | 20.64 | 0.7095 |
| R&B [4] | 14.80 | 0.291 | 28.18 | 20.58 | 0.7114 |
| CSG | 15.11 | 0.282 | 27.90 | 20.51 | 0.7049 |
| Ours | **17.28** | **0.309** | **27.04** | **20.85** | **0.7202** |

| Model (On Flickr30K) | AP$\uparrow$ | CLIP-s$\uparrow$ | FID$\downarrow$ | PickScore$\uparrow$ [5] | ImageReward$\uparrow$ [6] |
| :---: | :---: | :---: | :---: | :---: | :---: |
| Layout-Control | 8.42 | 0.310 | 29.79 | 21.09 | 0.7038 |
| AttRe [3] | 15.51 | 0.296 | 27.51 | 21.23 | 0.7109 |
| R&B [4] | 17.63 | 0.306 | 28.22 | 21.16 | 0.7071 |
| CSG | 17.58 | 0.299 | 27.64 | 21.22 | 0.7027 |
| Ours | **19.74** | **0.327** | **26.85** | **21.41** | **0.7218** |

[1] Instancediffusion: Instance-level control for image generation. CVPR 2024.
[2] Migc: Multi-instance generation controller for text-to-image synthesis. CVPR 2024.
[3] Grounded text-to-image synthesis with attention refocusing. CVPR 2024.
[4] R&B: Region and Boundary Aware Zero-shot Grounded Text-to-image Generation. ICLR 2024.
[5] Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation. NeurIPS 2023.
[6] ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation. NeurIPS 2023.

***W2. Multiple instances.*** **A2.** We would like to clarify that our method is capable of handling textual prompts and layouts involving multiple instances of the same object. This can be achieved by jointly mapping quantifiers and object names (e.g., "two" + "apples") to multiple bounding boxes, and then enforcing center-based generation constraints within each individual bounding box.
Visual results are available at this [link](https://anonymous.4open.science/r/WinWinLay/Multiple%20instances%20generation/Multiple%20instances%20generation.png) and have been added to the revised version.

***W3. Typos.*** **A3.** We have carefully checked the grammar of the entire manuscript and made corrections.

Thanks again for the insightful review. We are happy to discuss any aspects of the manuscript that may require further clarification.

---

Rebuttal Comment 1.1: Comment: Thank you for addressing my questions. All my concerns have been resolved, and I have nothing further to discuss.

---

Reply to Comment 1.1.1: Comment: We are truly grateful for your thoughtful and constructive feedback, which has been instrumental in improving our work. We are pleased to hear that your concerns have been fully addressed and would greatly appreciate it if you would consider reflecting this in your final score. Thank you again for your time and valuable input throughout the review process :)
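The quantifier-to-box mapping described in A2 (e.g., "two" + "apples" expanded into one per-box constraint) could look roughly like this; the data structures and the helper name `expand_instances` are hypothetical illustrations, not the authors' implementation:

```python
NUM_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4}

def expand_instances(phrase, boxes):
    # "two apples" + [box1, box2] -> [("apples", box1), ("apples", box2)]
    words = phrase.lower().split()
    count = NUM_WORDS.get(words[0], 1)
    noun = " ".join(words[1:]) if words[0] in NUM_WORDS else phrase
    assert len(boxes) == count, "need one bounding box per instance"
    return [(noun, box) for box in boxes]

pairs = expand_instances("two apples", [(10, 10, 60, 60), (70, 10, 120, 60)])
```

Each resulting (noun, box) pair can then receive its own center-based generation constraint, as the rebuttal describes.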
Summary: This paper presents a new method in the layout-to-image domain. Two key problems this paper tries to address:
* Imprecise localization
* Unrealistic artifacts

The core contributions of this paper:
* A non-local attention energy function
* An adaptive updating mechanism to balance spatial control and image quality

The paper has a well-organized structure:
- Basic context, introduction, related work
- Problem analysis, theoretical analysis
- Method details
- Evaluation

This structure is easy to follow. And the theoretical analysis part is the most interesting and insightful part.

## Update after rebuttal
The rebuttal addressed my concerns. So I increased the rating.

Claims And Evidence: The claims have been supported by theoretical analysis, proofs and comprehensive evaluations, including quantitative and qualitative experiments. The overall claim is solid.
Methods And Evaluation Criteria: This method follows existing evaluation criteria. These results make sense to illustrate the effectiveness of the proposed method.
Theoretical Claims: There are two theoretical claims:
- The non-local attention energy function helps overcome the spatial distribution biases
- The adaptive update promotes in-domain updating.

Both of the theoretical claims look correct.
Experimental Designs Or Analyses:
- Quantitative
  - Layout accuracy has been evaluated using AP_50/AP
  - Image-text alignment
- Qualitative
  - User study
  - Some qualitative comparison results

Supplementary Material: The appendix at the end of this paper.
Relation To Broader Scientific Literature: This paper introduces some improvements on the layout-to-image generation problem. Potentially the idea could also help other training-free methods in the T2I domain, e.g. some training-free image editing tasks.
Essential References Not Discussed: This paper has discussed relevant papers.

Other Strengths And Weaknesses: Main advantages:
- The paper writing is well-organized and easy to follow.
- The problems of existing work have been well illustrated in the theoretical analysis sections
- The quantitative evaluation results show promising improvement over existing works.

There are two main weaknesses of this paper:
- The non-local attention prior seems to also introduce a new bias: the object seems to be prioritized toward the center region.
- The adaptive update rule has already been proposed in several places, e.g. (Taming Transformers for High-Resolution Image Synthesis).

Other Comments Or Suggestions:
- The paper's figures use a lot of vertical space, affecting readability.

Questions For Authors: The improved non-local attention energy function is designed to solve the spatial distribution biases problem. However, the non-local attention prior seems to also introduce a new bias: the object seems to be prioritized toward the center region. This may not be a critical problem, but it seems that it would be better included in the limitations or discussion sections. Are there any other limitations of the proposed method?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
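For intuition on the kind of two-objective gradient balancing the adaptive-update discussion concerns, here is a deliberately simple norm-equalized combination of a layout-control gradient and a fidelity gradient. This is only an illustrative stand-in for multi-task weighting in general, not the paper's Nash-bargaining solution:

```python
import numpy as np

def balanced_update(g_layout, g_fidelity, eps=1e-8):
    # rescale each gradient to unit norm so neither objective dominates
    # purely by magnitude; a crude proxy for principled multi-task weighting
    g1 = g_layout / (np.linalg.norm(g_layout) + eps)
    g2 = g_fidelity / (np.linalg.norm(g_fidelity) + eps)
    return g1 + g2

# a huge layout gradient no longer drowns out a tiny fidelity gradient
g = balanced_update(np.array([100.0, 0.0]), np.array([0.0, 0.01]))
```

Without such balancing, the raw sum of the two gradients above would be almost entirely the layout direction, which mirrors the controllability-versus-quality tension the review raises.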
Rebuttal 1: Rebuttal: We would like to sincerely thank Reviewer z1Kv for the thorough and constructive feedback on our manuscript. We are more than happy that the reviewer finds our writing well-organized, theoretical analysis illustrative, and quantitative improvement promising. We would like to address the concerns as below. ***W1. Object in the center region.*** **A1.** We would like to clarify that generating objects near the center of the bounding box represents a "positive bias", which is a deliberate design choice intended to enhance generation stability and user controllability. Specifically, placing objects at the center of the bounding box offers finer-grained and more predictable spatial control. In contrast, generating objects near the periphery of the box can introduce several undesirable effects: - Objects placed near the boundaries are more susceptible to spatial overflow, often leading to leakage beyond the given region; - Non-central placements inherently introduce positional ambiguity, making it difficult to predict or control which subregion of the box will be emphasized during generation. Such ambiguity increases uncertainty during optimization and often results in instability or inconsistent outcomes. By encouraging central placement within the bounding box, our method achieves a more robust trade-off between layout adherence and generation reliability. This design also enables users to maintain precise control, especially when dealing with dense or complex scenes. To avoid confusion, we have provided additional clarification in both the Method and Limitation sections of the revised manuscript. We emphasize that this center-focused behavior arises from the prior formulated in Equ.13, where the centrality bias is smoothly modulated by the hyperparameter $\rho$ and decayed over generation. 
This mechanism is not rigid; it merely guides the model toward stable, interpretable spatial alignment without strictly constraining the center of the object to remain fixed at the center *(As discussed in lines 80-84)*. ***W2. Adaptive update rule.*** **A2.** While we acknowledge the use of adaptive updates in prior works such as "Taming Transformers for High-Resolution Image Synthesis", we would like to emphasize that the adaptive update rule proposed in our method is conceptually and technically distinct, as elaborated below: - Theoretical Foundation: The adaptive strategies employed in prior works are largely heuristic, lacking a principled formulation. In contrast, our approach is grounded in multi-task optimization theory. Specifically, we formulate the adaptive update as a Nash Bargaining problem between two competing objectives—layout controllability and visual fidelity—leading to a theoretically justified and analytically solvable solution. This formulation offers a deeper understanding of the trade-off and provides a rigorous basis for gradient balancing. - Gradient Computation: To mitigate computational burden, prior works typically approximate gradients by restricting updates to the final layer or a limited subset of parameters. Our method, however, naturally yields full gradients for each task without introducing additional overhead, as both objectives are defined over the same latent variables in the denoising process. This makes the adaptive update mechanism not only theoretically sound but also computationally efficient and seamlessly compatible with our training-free setting. Taken together, while the high-level motivation is shared, the underlying methodology, implementation, and theoretical framing of our adaptive update rule differ substantially. We have further clarified this distinction in the revised manuscript. ***W3. 
Other Limitation.*** **A3.** Compared to Text-to-Image methods, our Layout-to-Image approach typically requires more time; for instance, SD1.5 takes 3.17s, while WinWinLay takes 15.01s. Nevertheless, our method remains more efficient than the current SOTA method, CSG, which takes 19.88s. Hence, improving the generation efficiency of our model will be an important focus in future work. ***W4. Use of Vspace.*** **A4.** We have reorganized the content layout in the revised version to ensure improved visual clarity. Thanks again for the insightful review. We are happy to discuss any aspects of the manuscript that may require further clarification. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal addressing my concerns. I do not have further questions. I lean toward accepting this paper. --- Reply to Comment 1.1.1: Comment: We are truly grateful for your thoughtful and constructive feedback, which has been instrumental in improving our work. Thank you again for your time and valuable input throughout the review process :)
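A generic sketch of the kind of Nash-bargaining gradient balancing described in A2 above. This is *not* the authors' closed-form rule (that is derived in the paper itself); `nash_bargaining_direction`, the damped fixed-point solve, and the toy gradients are illustrative assumptions, in the spirit of Nash-MTL-style multi-task optimization:

```python
import numpy as np

def nash_bargaining_direction(grads, iters=200, eps=1e-8):
    """Combine per-task gradients via a Nash-bargaining-style solution.

    Approximately solves, by damped fixed-point iteration, for positive
    weights alpha with (G G^T) alpha = 1 / alpha (elementwise), then
    returns the combined direction d = sum_i alpha_i g_i. Hypothetical
    sketch only -- not the paper's analytic update.
    """
    G = np.stack(grads)              # (k, dim): one task gradient per row
    M = G @ G.T                      # (k, k) Gram matrix of task gradients
    alpha = np.ones(len(grads))
    for _ in range(iters):
        alpha = 0.5 * alpha + 0.5 / np.maximum(M @ alpha, eps)
    return alpha @ G

# Two partially conflicting toy objectives: layout control vs. fidelity
g_layout = np.array([1.0, 0.2])
g_fidelity = np.array([0.1, 1.0])
d = nash_bargaining_direction([g_layout, g_fidelity])
# Both tasks make progress along the combined direction
assert g_layout @ d > 0 and g_fidelity @ d > 0
```

Because the weights stay positive, both objectives have positive inner product with the combined update, which is the "win-win" property the rebuttal argues for.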
Contextures: Representations from Contexts
Accept (poster)
Summary: The authors propose a framework for understanding representation learning through the concept of contextures, which are the top singular functions of an operator induced by a context variable. The goal is to characterize learned representations across supervised, self-supervised, and manifold learning paradigms. The main contributions are: a unifying theoretical framework for representation learning, a demonstration that scaling model size yields diminishing returns once optimal contextures are approximated, and a task-agnostic metric for evaluating the usefulness of contexts. The study shows that learned representations align with the top singular functions, and the proposed metric correlates well with downstream performance across multiple datasets. Claims And Evidence: The authors claim that contextures unify representation learning by framing learned representations as top singular functions of a context-induced operator. They provide theoretical proofs, empirical validation showing neural networks approximate these functions, and a task-agnostic metric that correlates with downstream performance. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for analyzing representation learning through the contextures framework. The use of singular function decomposition provides a theoretical foundation, and the proposed task-agnostic metric aligns with the goal of evaluating context usefulness. Also, the correlation analysis with the downstream performance is strong. Theoretical Claims: No immediate flaws are evident, and the theoretical claims appear sound. Experimental Designs Or Analyses: 1. The paper evaluates the contextures framework through neural network scaling experiments to test its alignment with theoretical predictions. 2. The authors compare learned representations with the top singular functions of the context-induced operator in order to validate their claims on scaling limits. 3. 
The authors also evaluated their proposed task-agnostic metric by measuring its correlation with downstream task performance across multiple datasets. 4. The use of canonical correlation analysis and KNN-based metrics provides a structured approach to evaluating representation quality. Supplementary Material: I reviewed the supplementary material. That is the theoretical proofs, experimental details, and extended discussions on contextures. Relation To Broader Scientific Literature: The framework proposed by the authors utilizes singular functions to characterize learned representations, which aligns with SVD techniques used in Principal Component Analysis (PCA) and Latent Semantic Analysis (LSA). For the proposed task-agnostic metric for evaluating context quality, it is conceptually similar to CCA. Essential References Not Discussed: The paper's key contributions are grounded in established concepts in representation learning, particularly spectral methods and feature space learning. Most of the essential references were cited in the paper. Other Strengths And Weaknesses: Strengths: 1. Clear description of background knowledge and motivations needed to understand the proposed representation model. 2. Clear exposition of the proposed method. 3. The authors introduce the contextures framework that provides a unified mathematical perspective on representation learning across multiple paradigms. 4. The analysis on scaling laws and the diminishing returns of increasing model size offers valuable insights for future neural network training strategies. 5. The proposed task-agnostic metric for evaluating context usefulness is a great contribution. Weaknesses: The paper could provide more discussion on the computational complexity and scalability of the proposed methods. Other Comments Or Suggestions: 1. Overall the paper is well-written. 2. Consider discussing potential limitations of the contextures framework in handling highly heterogeneous data distributions. 
Questions For Authors: 1. What is the computational cost of estimating the top singular functions for large-scale datasets? A discussion on efficiency and scalability would clarify the feasibility of implementing contextures in practical settings. 2. How does the proposed task-agnostic metric compare to existing evaluation methods for representation learning (e.g., probing classifiers, alignment metrics)? A direct comparison would help justify the usefulness of the new metric. 3. How does the contextures framework perform when the context is noisy? Would the learned representations still be useful, or would they degrade in quality? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We are glad that you find our theoretical analysis and the proposed metric valuable. We would like to answer your questions as follows: ## 1. Efficiency For extracting the top singular functions, the time complexity of kernel PCA is the same as eigendecomposition, which is about $O(m^3)$, where $m$ is the number of pretrain samples. The time complexity of extracting the singular functions using a pretraining objective (the deep learning method) depends on the optimizer, and there is a rich body of work on the convergence rate of popular optimizers such as Adam and SGD. Moreover, there exist many deep-learning-based methods that can extract the eigenfunctions, including VICReg, which is used in the experiments, NeuralEF (Deng et al., 2022), and NeuralSVD (Ryu et al., 2024). The complexity of these methods grows linearly with the size of the dataset and does not suffer from the scalability issues of kernel PCA. For instance, for a mini batch of size $B$, NeuralSVD only has complexity $O(B^2 d + d^2 B)$, where $d$ is the dimension of representation. In other words, the deep learning method is scalable and much faster than kernel PCA, though it is not guaranteed to find the exact singular functions because the optimization problem is non-convex. We will add this discussion to the paper. Deng et al. "NeuralEF: Deconstructing kernels by deep neural networks." ICML 2022. Ryu et al. "Operator SVD with neural networks via nested low-rank approximation." ICML 2024. ## 2. Evaluation metric We evaluate a context with a task-agnostic metric because we would like the encoder to be transferable to various tasks. Moreover, foundation models are often applied to tasks they are not designed for, so we might not know all the tasks at pretrain time. On the contrary, a probing classifier only evaluates the context on a specific task, so it cannot reflect the transferability of the encoder. 
There is a strong connection between our theory and alignment metrics, which we discuss in Section 4.2. Alignment metrics compare two encoders (focusing on their relative relationship), while our metric evaluates the contexture of one context (assessing its intrinsic information-capturing properties). ## 3. Noisy contexts The effect of a *noisy* context depends on how *noisy* is defined. We can think of two possible definitions. 1) The noise reduces the compatibility between the context and the task (Section 4.1). In this case, our analysis shows that the learned representation will have a lower quality. 2) The noise weakens the association between $X$ and $A$. Section 5 shows that a *good* context should have a moderate association. If the association level of the original context is already moderate, then the noise will make it worse. However, if the association level of the original context is strong, then adding noise might even have a positive effect. For example, it is widely observed in self-supervised learning that strong augmentations make the learned representations better. For example, the authors of SimCLR attribute its success largely to the aggressive crop ratio and color distortion it uses. ## 4. Paper updates We added a new limitation paragraph in the conclusion (see below), and a new related work section (please see our response to reviewer [R5n1](https://openreview.net/forum?id=4GZwFPzLgW&noteId=szOgxgY7bm)). We appreciate your feedback and hope that our response answers all your questions. We are happy to address any follow-up questions. # Updated limitation and open problem paragraph (to be added to conclusion) Our analysis has three limitations, which lead to three open problems. First, our analysis focused on the minimizers of the objectives. However, Cohen et al. (2021) showed that deep models trained by popular gradient methods do not find the minimizers, but instead oscillate around the *edge of stability*. 
The open problem is how this phenomenon affects our results. Second, we did not discuss the impact of the inductive bias of the model architecture, such as the translation invariance of convolutional neural networks. Such inductive biases can affect the context and, therefore, the encoder. We pose how to integrate the effect of these biases into our theory as an open problem. Third, our theory assumes that $P_X$ is fixed. In practice, however, there is always a data distribution shift from upstream to downstream. Refining our theory to handle such distribution shifts is an exciting direction for future work.
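To make the kernel-PCA route from point 1 of the rebuttal above concrete, here is a minimal self-contained sketch (our own illustration, not the authors' code): the top-d singular functions evaluated at the m pretrain samples are read off from an eigendecomposition of the centered kernel matrix, which is the O(m^3) step that motivates the deep-learning alternatives (NeuralEF, NeuralSVD).

```python
import numpy as np

def top_d_kernel_pca(K, d):
    """Top-d eigenpairs of a double-centered PSD kernel matrix K (m x m).

    The eigendecomposition below costs O(m^3), which is why kernel PCA
    does not scale to large pretraining sets.
    """
    m = K.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m       # centering matrix
    evals, evecs = np.linalg.eigh(H @ K @ H)  # eigenvalues in ascending order
    order = np.argsort(evals)[::-1][:d]
    # Columns of evecs[:, order] approximate the top-d eigenfunctions
    # evaluated at the m samples.
    return evals[order], evecs[:, order]

# Toy example: RBF kernel on 200 one-dimensional points
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
K = np.exp(-(x - x.T) ** 2)
evals, phi = top_d_kernel_pca(K, d=8)
```

The deep-learning extractors mentioned in the rebuttal replace this cubic step with mini-batch objectives whose cost grows linearly in dataset size, at the price of non-convex optimization.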
Summary: The authors propose a framework that encapsulates common representation learning methods as learning the joint distribution of inputs and contexts. This joint distribution can be decomposed via eigendecomposition, which shows that optimal representations are learned by recovering the subspace spanned by the top-d singular functions. Thus many tasks benefit from the learned representation whenever the encoder recovers the span of the top-d singular functions. The authors explain that contexts with either too weak or too strong an association with inputs are less effective, and propose a task-agnostic metric to evaluate context usefulness. Finally, the authors show empirically that the proposed contexture metrics correlate with downstream performance. Claims And Evidence: The observation on optimal representations under learning the contextures is theoretically supported. Methods And Evaluation Criteria: Yes. The authors compare a variety of context types, and there are also controlled experiments on the level of association between inputs and contexts. Theoretical Claims: I closely verified Section 2. Experimental Designs Or Analyses: Figures 1 & 2 make sense. Supplementary Material: No. Relation To Broader Scientific Literature: This paper proposes a theoretical framework of contextures which explains why learned representations are useful for downstream tasks. I see this framework as quite generally applicable to a variety of settings, as listed in Section 2.1. Essential References Not Discussed: Canatar, A., Bordelon, B. & Pehlevan, C. Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks. Nat Commun 12, 2914 (2021). 
https://doi.org/10.1038/s41467-021-23103-1 - This paper discussed similar ideas of learning via eigenfunction alignment Other Strengths And Weaknesses: Strength - The contexture framework provides a perspective to understand why representation learning is useful for downstream tasks. For example, it's not clear why the representation learned via self-supervised learning in vision helps reduce generalization error on downstream supervised tasks. This top-d singular function learning perspective provides an angle for understanding these phenomena. Weakness - Figure 1 results are a bit weak. It's still unclear to me why the correlation decreases as width goes up. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review. ## 1. Experiments in Section 4.2 (scaling) are updated We changed the embedding dimension from $d=16$ to $d=128$ (which is more common in practice) and reran the experiment. The results are plotted in https://i.postimg.cc/FRncq3Zb/alignment.jpg The main observation is that the alignment becomes much higher (CCA generally $>0.8$ and can be $\approx 0.9$). Also, as the model becomes wider/deeper, the alignment first increases and then decreases. We interpret the results as follows: - **Effect of $d$**: The previously used $d=16$ was too small. For instance, the $16^{th}$ eigenvalue is quite large ($>0.3$), and is close to the $25^{th}$ eigenvalue. Thus, it is hard to distinguish which of the top-$25$ singular functions belong to the top-$16$, which is why the original alignment was low. In contrast, the $128^{th}$ eigenvalue is close to zero, so when $d=128$, the network can learn the top singular functions pretty well. Prior work mostly used $d \ge 128$, such as Kornblith et al., 2019 and Huh et al., 2024, so using $d = 128$ is more reasonable. Moreover, CCA$\approx 0.9$ is considered very high in prior work. - **Alignment trend**: When the model gets larger, the alignment first increases, because the function class of $\Phi$ becomes closer to the entire $L^2$ space, so the optimizer of the pretraining objective is closer to the top eigenfunctions. However, when the model is already large enough, making it larger makes optimization harder. Hence, with the same number of pretraining steps, a larger model will be farther away from the minima, and the alignment decreases. ## 2. Updated related work **Based on the suggestions of the reviewers, we extend the discussion of the present paper's relation to the prior work as follows, which will be moved to the main body in the final version.** # Related Work Understanding the representations learned by an encoder has long been a central topic in machine learning research. 
Prior work has developed quite different theoretical frameworks for different pretraining methods, but these frameworks are typically not transferable across different settings. This paper provides a unified framework that has a very broad scope, covering manifold learning, SSL based on augmentation, supervised learning, graph representation learning, and more. In the pre-deep-learning-boom era, early work on manifold learning revealed the connection between representation learning and extracting the top eigenfunctions of a kernel (Bengio et al., 2004; Coifman & Lafon, 2006). Moreover, using the eigenvectors of the graph Laplacian as node representations on graphs was a classical technique in graph applications (Belkin & Niyogi, 2002; Zhu et al., 2003). On the theoretical understanding of deep representations, there are two lines of work that are closely related to this paper. The first line studies **representation alignment** (Kornblith et al., 2019; Canatar et al., 2021; Huh et al., 2024; Fumero et al., 2024). Representation similarity has also been studied in neuroscience (Kriegeskorte et al., 2008). These papers mainly focus on comparing two representations. While we aim to evaluate a single representation and the context on which it is trained, we use the tools developed by these papers in our analysis. The second line develops the **spectral theory of self-supervised learning (SSL)**. SSL has achieved remarkable success in recent years (Radford et al., 2019; Chen et al., 2020; Zbontar et al., 2021; Bardes et al., 2022; He et al., 2022; Assran et al., 2023). HaoChen et al. (2021); Johnson et al. (2023) related contrastive learning to the spectrum of the augmentation graph and the positive-pair kernel, and Munkhoeva & Oseledets (2023) related SSL to matrix completion. Later, Zhai et al. (2024) extended the spectral theory to all SSL methods, not just contrastive ones. 
The present work further extends this theory to representation learning in its most general form, beyond SSL. Besides, other prior theoretical work on SSL studied its training dynamics (Damian et al., 2022; Jing et al., 2022; Tian, 2022) and built its connection to information theory (Achille & Soatto, 2018; Balestriero & LeCun, 2022; Shwartz-Ziv et al., 2023). Another line of work for characterizing the representations aims to learn disentangled or causally associated representations (Higgins et al., 2018; Scholkopf et al., 2021). It is shown that such representations can be provably recovered, provided there is sufficient variability in the environments, for instance by indexing the environments via sufficiently varying auxiliary variables or interventions (Khemakhem et al., 2020; Varıcı et al., 2024; Buchholz et al., 2023; Yao et al., 2024). Some of these results further require stringent parametric assumptions. --- Rebuttal Comment 1.1: Comment: Thank you for the additional results and the newly added related work section. The explanation on width scaling makes sense to me. I think the contexture theory is quite interesting, but theory is only interesting to the extent that it's predictive of empirical performances. I am wondering if your approach can be used as a principle for model selection, or can it be predictive of scaling law. There's been recent attempt at comparing scaling law between CLIP-style visual encoder and DINO-style purely vision ssl based encoder https://arxiv.org/abs/2504.01017. It would be interesting to see if the proposed alignment metric can be predictive of scaling law. This is by no means required given the limited amount of time and compute, but I'm just curious what the authors think about this. I maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you for your response! We are glad that you found the contexture theory quite interesting. 
Indeed, our aim is to build a theory with the explanatory power that can account for many empirical observations, and the predictive power that can provide insights into how to make further progress in pretraining. The reviewer cites a very interesting paper, which shows that an encoder trained on images alone with SSL can be used with LLMs for visual QA tasks, and can even perform better than vision-language models like CLIP. They also observed that the scaling law plateaus more quickly for CLIP than for visual SSL. According to our theory, one possible reason is that the context of CLIP has a weaker association than visual SSL, that is the eigenvalues of CLIP decay faster, making it easier to capture all the eigenfunctions with large eigenvalues. Another related empirical observation was made by Huh et al. (2024), who reported that visual SSL models and vision-language models learn highly aligned representations. These phenomena might be quite baffling because images and text are commonly perceived as fundamentally different. Our theory provides an explanation, that is the visual SSL context (where $A$ is an augmented image) and the vision-language context (where $A$ is a text caption) are in fact highly aligned, in the sense that many of their top eigenfunctions are shared! We will add the above discussions to the paper. They show that the contexture theory can provide insightful explanations for many empirical observations that are otherwise hard to understand. In future work, we will verify these explanations with experiments. We believe that our theory is a valuable tool for choosing the right context (pretraining objective) or training hyperparameters. 
For example, when choosing the hyperparameters in self-supervised learning such as crop ratio, mask ratio and so on, we can efficiently estimate the spectrum of the context under each combination of hyperparameters using the method described in the paper, and then use our quantitative metric to choose the one with the best decay rate (neither too fast nor too slow). This allows us to avoid training a large model for every possible hyperparameter combination. (Huh et al., 2024) Position: The platonic representation hypothesis, ICML 2024.
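The hyperparameter-selection workflow described in the last paragraph could look like the following. Everything here is a hypothetical illustration: `decay_score` (the fraction of spectral mass in the top-d eigenvalues) is a simple stand-in for the paper's actual quantitative metric, the target value is arbitrary, and the spectra are synthetic rather than estimated from real contexts.

```python
import numpy as np

def decay_score(eigvals, d=128):
    """Fraction of spectral mass captured by the top-d eigenvalues.

    A hypothetical proxy for decay rate: near 1.0 means very fast decay
    (weak association); near d / len(eigvals) means an almost flat
    spectrum (overly strong association).
    """
    s = np.sort(eigvals)[::-1]
    return s[:d].sum() / s.sum()

# Hypothetical estimated spectra for three crop ratios
spectra = {
    0.1: np.exp(-0.20 * np.arange(512)),  # decays very fast
    0.3: np.exp(-0.02 * np.arange(512)),  # moderate decay
    0.9: np.ones(512),                    # flat: essentially no decay
}
# Pick the setting whose decay is closest to a target, avoiding both
# extremes -- without pretraining a large model for every setting.
target = 0.8
best = min(spectra, key=lambda r: abs(decay_score(spectra[r]) - target))
```

The point of the sketch is only the shape of the loop: estimate each context's spectrum cheaply, score its decay, and select before any large-scale pretraining.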
Summary: The manuscript presents a framework called contextures, showing that many representation learning methods aim to capture the top singular functions of an operator defined by the relationship between inputs and a context variable. It shows that such representations are optimal for tasks that align with the context and that further improvements require better context design rather than just scaling up model size. ## update after rebuttal I have increased the score to Weak Accept from Weak Reject, primarily due to additional experiments on MNIST and CIFAR-10. Claims And Evidence: Some practical claims, particularly the implications for neural scaling laws and the benefits of designing better contexts, are less extensively validated. These claims are based on experiments with a limited set of small datasets. Methods And Evaluation Criteria: The experimental validation relies on relatively small and simple datasets (up to 21k samples). Theoretical Claims: Theoretical claims are sound. Additional experimentation on these assumptions and their implications in real-world scenarios can enhance the work. Experimental Designs Or Analyses: The experiments rely on relatively simple datasets (up to 21k samples) again. Supplementary Material: Mostly proofs and context evaluation. Relation To Broader Scientific Literature: The paper builds on SSL representation learning methods. It frames many self-supervised learning (SSL) approaches as the recovery of the top singular functions of a context-induced operator with the focus on extracting dominant eigenfunctions. This perspective unifies diverse SSL methods (contrastive, non-contrastive, masked autoencoders) in the way of learning low-dimensional representations. Essential References Not Discussed: For example, recent unification frameworks in self-supervised learning provide an important context that is not discussed. 
One work by Balestriero and LeCun (2022) shows that many SSL methods—both contrastive (e.g., SimCLR) and non-contrastive (e.g., Barlow Twins, VICReg)—can be seen as recovering top eigenfunctions of certain operators, akin to classical spectral methods like Laplacian Eigenmaps or PCA. Balestriero, Randall, and Yann LeCun. "Contrastive and non-contrastive self-supervised learning recover global and local spectral embedding methods." Advances in Neural Information Processing Systems 35 (2022): 26671-26685. Similarly, Munkhoeva and Oseledets (2023) interpret SSL losses as implicitly performing Laplacian-based eigenfunction learning under data augmentations. Munkhoeva, Marina, and Ivan Oseledets. "Neural harmonics: Bridging spectral embedding and matrix completion in self-supervised learning." Advances in Neural Information Processing Systems 36 (2023): 60712-60723. The paper misses several methods, such as GPT, I-JEPA, data2vec 2.0, and DINO v2, that have significantly advanced self-supervised learning in language and vision. Other Strengths And Weaknesses: - While the manuscript attempts to unify many approaches across different domains, it does not perform experiments on real datasets or analyses, for example, with the discussed SSL models. The experiments were performed on very small datasets with a sample size of no more than 21613 samples. There is a lack of application of this theory to more common datasets like ImageNet, or at least CIFAR10 and STL10. - In Figure 1, even if we train models with different depths and widths, it is not clear how useful the representation will be with respect to correlations. - The classical canonical correlation analysis (CCA) has been shown not to work well with neural network representations (Kornblith et al., 2019). - There are only vague details on how the models were trained. It is hard to reproduce. Kornblith, Simon, et al. "Similarity of neural network representations revisited." International conference on machine learning. 
PMLR, 2019. Other Comments Or Suggestions: The manuscript is trying to cover too many topics and ideas, specifically neural collapse and neural scaling laws, which should be discussed in separate publications. Then, you will have space to experiment on SSL and natural image datasets. Questions For Authors: No additional questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We address your concerns as follows: ## 1. Our results are applicable to larger datasets **We conduct more experiments on larger datasets such as MNIST and CIFAR-10.** We initially used datasets with $<$ 30K samples (largest: 28,155; see App. F) since most experiments compare the pretrained $\Phi$ with kernel PCA, which does not scale well to large datasets. However, our theoretical results are independent of dataset size, and our metric is applicable more broadly. Here we show it on MNIST and CIFAR-10. On MNIST, we use LeNet as $\Phi$ and random cropping with various crop ratios as contexts. Spectra of these contexts are estimated using the method in lines 356–368. For instance, see the estimated spectrum for crop ratio of $0.3$: https://i.postimg.cc/bYQP1N46/mnist1.jpg. We plot the prediction error of a linear probe on $\Phi$ (blue) alongside $\tau_d$ (orange, scaled) at crop ratio 0.3 here: https://i.postimg.cc/3rg4SGkm/mnist3.jpg, which shows that $\tau_d$ tracks the actual error across different $d$. We further vary the crop ratio from 0.1 to 0.9 and compare $\tau$ with the prediction error of a linear probe trained on $\Phi$. The result (https://i.postimg.cc/2yzTMwGX/mnist2.png) shows a strong correlation. On CIFAR-10, we use ResNet-18 with SimCLR augmentation as the context. See the estimated spectrum at https://i.postimg.cc/hGTcDLW0/cifar1.jpg. In https://i.postimg.cc/ZqQcWJpL/cifar2.png, we plot the prediction error of a linear probe trained on $\Phi$ (blue) with $\tau_d$ (orange, scaled to fit the image). Again, $\tau_d$ tracks the actual error, supporting its usefulness in choosing dimension $d$. We will add these experiments to the paper. ## 2. The benefits of designing better contexts We proved that for a fixed context, making the model larger has diminishing returns, so it follows as a natural logical consequence that better contexts are necessary for further improvement. 
Thus, we only intended our experiments to provide a simple empirical illustration rather than corroboration. We also note the increasing empirical evidence in the literature: many recent advancements in pretraining are due to better contexts. For example, as the authors of SimCLR noted, its success owed greatly to its aggressive crop ratio and color distortion. ## 3. The usefulness of representations The experiment in Section 4.2 aims to show that as the model gets larger, $\Phi$ becomes more aligned with the top-d eigenfunctions. However, a better alignment does not necessarily imply better performance at a downstream task. Whether $\Phi$ is good on a particular task depends on the compatibility between the context and the task, which we prove in Section 4.1. We also improved this experiment. See point 1 in our response to Reviewer [R5n1](https://openreview.net/forum?id=4GZwFPzLgW&noteId=szOgxgY7bm). ## 4. CCA versus CKA We use CCA but not CKA because our setting is different from that in Kornblith et al. In their setting, they want the alignment metric to be invariant under orthogonal transformations, but not under other invertible linear transformations on $\Phi$. That's why they used CKA. In contrast, we want the metric to be invariant under invertible linear transformations on $\Phi$ because they do not affect the performance of the downstream linear probe, so we use CCA. Moreover, CCA is equivalent to linear CKA if both representations are centered and whitened. We also evaluate the CKA when the representation is $[s_1 \mu_1, \cdots, s_d \mu_d]$, that is each singular function is multiplied by the singular value. We run the experiment in Section 4.2 with a fixed width 512 and varied depths. The result is plotted at https://i.postimg.cc/Jzrbc00M/Picture1.png. The plot shows that the trends of CCA and CKA are very similar. ## 5. Reproducibility We will add details to the appendix. 
We also kindly point out that we have attached our code in the supplementary material, and all experiments are run with fixed random seeds, so all results can be exactly reproduced.

## 6. Neural collapse and scaling laws

Our remarks show that our theory can explain many empirical phenomena and cover different ML paradigms. We believe that including these remarks is important since they show that our theory has real explanatory power. These should be viewed similarly to brief remarks in a later section, such as the conclusion, where such discussions are common. We are happy to move these to a later section if the reviewers see fit.

## 7. Paper updates

**We have updated the related work section.** See our response to Reviewer [R5n1](https://openreview.net/forum?id=4GZwFPzLgW&noteId=szOgxgY7bm).

We appreciate your feedback and hope that our response answers all your questions. We are happy to answer any follow-up questions.

---

Rebuttal Comment 1.1: Comment: Thank you for addressing some of the questions. I have decided to increase the score to Weak Accept, primarily due to the additional experiments on MNIST and CIFAR-10.

---

Reply to Comment 1.1.1: Comment: Thank you for your response! We are glad that our rebuttal addresses your questions.
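The invariance argument in point 4 (CCA versus CKA) can be checked numerically. Below is a minimal sketch, using randomly generated toy representations (not the paper's data): mean squared CCA is unchanged by an invertible, non-orthogonal linear map of the representation, while linear CKA is not.

```python
import numpy as np

def linear_cka(X, Y):
    # linear CKA on centered representations (Kornblith et al. style)
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

def mean_squared_cca(X, Y):
    # mean squared canonical correlation via orthonormal bases of the
    # centered column spaces
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(np.mean(s ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
A = rng.normal(size=(5, 5))      # a generic (invertible) linear map
Y = X @ A

# CCA is unchanged by the invertible map, so it stays at 1 ...
print(mean_squared_cca(X, Y))    # ~1.0
# ... while linear CKA drops, since A is not orthogonal
print(linear_cka(X, Y))          # < 1
```

This matches the rebuttal's point: a downstream linear probe is unaffected by invertible linear transformations of the representation, so a metric for this setting should share that invariance.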
Summary: The paper introduces a theoretical framework to characterize representations learned by a neural network: each data sample is represented by an input variable and a context variable (e.g. labels in classification settings, augmentations in self-supervised learning, or its K nearest neighbors). The authors propose to consider an operator, called the contexture, induced by the data and its context, defined by averaging over the context. They show that certain classes of representation learning algorithms, such as supervised classification or self-supervised learning, can be phrased within the presented theory. In particular, they claim that most algorithms aim at extracting the largest singular components of the contexture operator. To define performance, they evaluate whether the performance of a representation averaged over the context matches that of the sample. They propose a metric, based on computing the first d singular values of the contexture operator, to estimate downstream performance without direct access to the downstream task itself. Depending on the relation of the context to the downstream tasks (i.e. how much one can solve the downstream task from the context), the metric should predict better performance. They validate the theory on a collection of small-scale datasets from the OpenML repository.

Claims And Evidence: The paper claims the following contributions:
- The theory and formalization look generally good and support the following claims (see below), although it is not completely clear to me how much this corresponds to practical settings and how it relates to previous work in detail (see related work section).
- (i) *representation learning captures the relationship between the inputs and a context variable*: supported by the formalization in Sections 3 and 4, although it should be related to previous works (e.g. [8,4,5]).
- (ii) *contexture theory encompasses a large variety of learning paradigms*: supported by the corresponding proofs in Section 3.
- (iii) *Optimal representations (representations minimizing the loss) are the ones in which the contexture is learned*: supported by the corresponding proofs in Section 4.
- (iv) *Task and context association dependence*: supported by Section 5.1. How much is this related to the encoder considered? (see the related questions in the questions section).

Points (i) and (iii) could be related to the causal representation learning and disentanglement point of view on the topicality of representations; see e.g. [1,2,3].
- Experimental claims are less obvious and convincing; see the experiments section of the review for more details:
  - *the main role of scaling up the model size is bringing the learned representations close to the top singular values of the contexture operator*: the experiments provided are small scale, the evidence is sometimes weak, and the motivation is not discussed in depth (for example, the not-very-high correlation scores in Figure 1 are attributed to non-convex optimization, without a deeper analysis or an explanation of why this could be the case).
  - *Reflecting the current status of deep learning models*: while the theory is interesting and valuable, there is no direct evidence that it reflects the current status of deep learning models: experiments are performed on small-scale datasets, which are far even from simple benchmark datasets such as MNIST (60k samples).
  - Metric: it is not clear from the experiments how the proposed metric should be compared and contextualized with respect to previous work, e.g. [2,3].

Methods And Evaluation Criteria:
- The use of the datasets for empirical evaluation is not justified in the paper, in particular given that all datasets are very small scale and do not reflect the current status of deep learning datasets and models.
Theoretical Claims:
- I didn't check the correctness of the proofs in detail, but the formalization looks good at a first glance.

Experimental Designs Or Analyses:
## Weaknesses:
- All experiments are performed on small-scale datasets (on the order of a few hundred to 20k samples), not reflecting the current status of deep learning models trained on datasets at much larger scales.
- Figure 1: The trend of the correlation plot is not explained in depth: why does the correlation decrease at first? What explains the correlation always being below 0.5? It is hard to disentangle what is affected by the optimization from what the authors want to show here from the theory. It should be verified that training with different seeds attains low-variance results. One strategy could be to show on synthetic data that the correlation trend is always increasing, if that is what is expected.
- Figure 2: the plots in the second row are not very readable; a correlation plot like the ones in Figure 3, between the errors and the \tau measure, would allow a better understanding of the association level.
- Results in Figure 3 and Table 1: although failure cases are highlighted, no deep analysis or explanation is provided concerning why the metric fails to predict the downstream error on certain datasets.

Supplementary Material: Reviewed the related work, the code to inspect details of the neural architectures used, and part of the formalization/proofs.

Relation To Broader Scientific Literature: The related work section in the Appendix is quite short and does not cover many relevant works. A more comprehensive discussion of what was proved in (Zhai 2024) and how it has been extended should be included, in order to better understand the contribution of the paper. Some additional works would need to be discussed. Some examples: [1], which proposes a map between operator eigenspaces defined on graphs and the latent spaces of arbitrary neural models.
[2,3] discuss representation quality by measuring the decay of the eigenspectrum of the covariance operator in representation space. [6] introduces relative representations, where each data point is represented as a function of other points in representation space (close to the context variable idea), and shows that this representation is universal across models, architectures, and training algorithms. Earlier, [7] introduced the idea of using second-order information to compare representations. It would be interesting to relate the unification perspective proposed by contextures with the one analyzed in the causal representation learning and disentanglement settings [4,5,8].

_[1] Fumero, M., Pegoraro, M., Maiorca, V., Locatello, F., & Rodolà, E. (2024). Latent Functional Maps: a spectral framework for representation alignment, NeurIPS 2024_
_[2] Agrawal, Kumar K., et al. "$\alpha $-ReQ: Assessing Representation Quality in Self-Supervised Learning by measuring eigenspectrum decay." Advances in Neural Information Processing Systems 35 (2022)_
_[3] Nassar, Josue, et al. "On 1/n neural representation and robustness." Advances in Neural Information Processing Systems 33 (2020)_
_[4] Yao, Dingling, et al. "Unifying Causal Representation Learning with the Invariance Principle." arXiv preprint arXiv:2409.02772 (2024)_
_[5] Zimmermann, Roland S., et al. "Contrastive learning inverts the data generating process." International Conference on Machine Learning. PMLR, 2021._
_[6] Moschella, Luca, et al. "Relative representations enable zero-shot latent space communication." ICLR 2023_
_[7] Kriegeskorte, Nikolaus, Marieke Mur, and Peter A. Bandettini. "Representational similarity analysis-connecting the branches of systems neuroscience." Frontiers in Systems Neuroscience 2 (2008)_
_[8] Achille, Alessandro, and Stefano Soatto. "Emergence of invariance and disentanglement in deep representations." Journal of Machine Learning Research 19.50 (2018)_

Essential References Not Discussed: As discussed in the previous paragraph, a more comprehensive literature review would be needed. Some examples are reported below and discussed in the previous section:
_[1] Fumero, M., Pegoraro, M., Maiorca, V., Locatello, F., & Rodolà, E. (2024). Latent Functional Maps: a spectral framework for representation alignment, NeurIPS 2024_
_[2] Agrawal, Kumar K., et al. "$\alpha $-ReQ: Assessing Representation Quality in Self-Supervised Learning by measuring eigenspectrum decay." Advances in Neural Information Processing Systems 35 (2022)_
_[3] Nassar, Josue, et al. "On 1/n neural representation and robustness." Advances in Neural Information Processing Systems 33 (2020)_
_[4] Yao, Dingling, et al. "Unifying Causal Representation Learning with the Invariance Principle." arXiv preprint arXiv:2409.02772 (2024)_
_[5] Zimmermann, Roland S., et al. "Contrastive learning inverts the data generating process." International Conference on Machine Learning. PMLR, 2021._
_[6] Moschella, Luca, et al. "Relative representations enable zero-shot latent space communication." ICLR 2023_
_[7] Kriegeskorte, Nikolaus, Marieke Mur, and Peter A. Bandettini. "Representational similarity analysis-connecting the branches of systems neuroscience." Frontiers in Systems Neuroscience 2 (2008)_
_[8] Achille, Alessandro, and Stefano Soatto. "Emergence of invariance and disentanglement in deep representations." Journal of Machine Learning Research 19.50 (2018)_

Other Strengths And Weaknesses:
### Strengths
- **significance**: The paper targets the very important goal of considering different learning algorithms under a unified framework and of evaluating the quality of representations learned by deep learning algorithms.
- **broad theory**: the theory proposed is broad and applies to many settings of deep learning and representations.
- **failure cases**: The demonstration of failure cases for the proposed metric in Figure 3 (d) and (e) is useful and appreciated.

### Weaknesses
- **Clarity**: parts of the text could be improved in clarity: for example, in the section introducing contextures, integral operators, and kernels, practical examples accompanying these would help improve the reader's understanding.
- **Related work**: the related work section should be improved. See the specific section.
- **Experimental evidence**: the experimental validation of the proposed theory is not very strong and is far from the models and datasets used in practice (see the experimental section).

Other Comments Or Suggestions: I spotted the following typos:
- line 131: "The kernels captures" -> "The kernels capture"

Questions For Authors:
- Why consider just kernel PCA and VICReg as encoders? How much does the contexture operator depend on these choices?
- In the experiments in Figure 2 and Table 1: do the learned encoders solve the task? I.e., are the kernel PCA embeddings sufficient to solve the regression and classification tasks on the test set?
- How much does the context variable depend on which encoder one learns? I.e., for a given data context variable, one could optimize for an encoder that controls the level of association while still solving the training task.
- In the setting of supervised classification, what is $g$ in practice? Can it be just a selection operator, or is it an encoder for the labels?
- Figure 2: why are the \tau scores divided by 5?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We're glad that you find our theory generally good, interesting and valuable. We address your concerns below:

## 1. Experiments in Section 4.2 (scaling) updated

**We improved the setting and observed much higher alignments.** Due to the 5000-character limit, please see point 1 in our response to Reviewer [R5n1](https://openreview.net/forum?id=4GZwFPzLgW&noteId=szOgxgY7bm). All experiments use 15 different random seeds. The standard deviation plot (of one experiment) at https://i.postimg.cc/2651NYg4/Picture3.png shows low variation.

## 2. Our results are applicable to larger datasets

**We added experiments on larger datasets such as MNIST and CIFAR-10.** See point 1 in our response to Reviewer [RMLf](https://openreview.net/forum?id=4GZwFPzLgW&noteId=NtaiQsXUZU).

## 3. Figure 2

This figure illustrates how the association between $X$ and $A$ affects the spectrum’s decay rate, and how $\tau_d$ tracks the prediction error of a $d$-dimensional encoder. We divide $\tau$ by 5 for visualization. We’ll enlarge the figures to improve readability.

## 4. Metrics in [2, 3] do not fit our problem

There is a fundamental difference between our metric $\tau$ and those in [2, 3]. They use the eigenvalues $\lambda_i$ of $\Phi \Phi^{\top}$: $\langle \Phi, \Phi \rangle f_i = \lambda_i f_i$. In contrast, our theory is based on the spectrum of the context: $\langle \Phi, T_{P^+}\Phi\rangle f_i=s_i^2 \langle\Phi, \Phi\rangle f_i$. These spectra are fundamentally different. $s_i^2$ is invariant under invertible linear transformations on $\Phi$, whereas $\lambda_i$ is not. Since such transformations don’t affect downstream linear probe performance (weights can be adjusted accordingly), the metrics in [2, 3] are not suitable for our setting.

## 5. Elaboration on the two failure cases of our metric

Both failure cases stem from the fact that the metric depends only on the spectrum (singular values) of the context.
- Case 1: The association of the context is too weak/strong. Our metric will indicate that the context is bad, but it is still possible that for a particular task, the context is good.
- Case 2: The metric might not be able to compare different types of contexts. It may indicate similarity, though one context may perform well on a specific task while the other does not.

However, the experiment in Sec. 5 shows that such failure cases are rare.

## 6. Why we use kernel PCA and VICReg

Kernel PCA is the standard way to extract the exact top eigenfunctions. It is not scalable, and we compare deep learning based methods with it. We only use VICReg in the experiments because, as shown in Section 3, many objectives, such as the spectral contrastive loss and masked autoencoders, are equivalent to VICReg in that they are all minimized by the top-d singular functions. To demonstrate this, we run the experiment in Sec. 4.1 with both VICReg and the spectral contrastive loss (with width 512, varied depths). Results at https://i.postimg.cc/Hncjj2Yn/Picture2.png show very similar performance, differing slightly due to optimization. Hence, we use a single method in experiments, given their equivalence.

## 7. Whether the methods "solve the task"

No encoder solves all tasks. While we proved that many pretraining objectives learn the contexture, they are not guaranteed to solve arbitrary tasks. In Sec. 4.1, we proved that the encoder’s performance depends on the compatibility between the task and the context. If they are compatible, then an encoder that learns the contexture is guaranteed to succeed.

## 8. Whether the context variable depends on the encoder

In general, the context is defined prior to training the encoder, so it does not depend on the encoder.
There are two ways to encode additional context in the encoder:
- The inductive bias of the architecture (such as the translation invariance of CNNs)
- "Smoothed" encoders, such as adding Gaussian noise to the input

The first method changes the model and the second changes the input. In both cases, adding additional context weakens the association, and hence implicitly changes the level of association. Our theory does not analyze the effect of the inductive bias, and we pose it as an open problem.

## 9. What is $g$ in supervised learning?

In supervised learning, $\Phi$ extracts the eigenfunctions of $T_{P^+}\Lambda T_{P^+}^*$, so $g(a) = (T_{P^+}^* \Phi)(a) = E[\Phi(X) | \text{class } a]$. By neural collapse, the representations of samples from the same class $a$ collapse to a cluster, and $g(a)$ is the center of that cluster.

## 10. Paper updates

**We have updated the related work section.** See our response to Reviewer [R5n1](https://openreview.net/forum?id=4GZwFPzLgW&noteId=szOgxgY7bm). We also added a new limitation paragraph. See our response to Reviewer [rV84](https://openreview.net/forum?id=4GZwFPzLgW&noteId=25ftscT8C3).

We appreciate your feedback and hope that we've answered all your questions. We are happy to address any remaining concerns.
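The description of $g(a)$ in point 9 as a class-conditional mean can be made concrete with a toy numerical sketch. The class count, dimensionality, and noise level here are hypothetical, purely for illustration of the neural-collapse picture:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical setup: 3 classes in R^4, with representations Phi(x)
# clustered tightly around per-class centers (neural-collapse style)
centers = rng.normal(size=(3, 4))
labels = rng.integers(0, 3, size=600)
Phi = centers[labels] + 0.01 * rng.normal(size=(600, 4))

# g(a) = E[Phi(X) | class a]: the empirical center of class a's cluster
g = np.stack([Phi[labels == a].mean(axis=0) for a in range(3)])

# g recovers the cluster centers up to the small within-class noise
assert np.allclose(g, centers, atol=0.01)
```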
Statistical Test for Feature Selection Pipelines by Selective Inference
Accept (oral)
Summary: This paper presents a statistical test to assess the significance of data analysis pipelines, which transform raw data by integrating various analysis algorithms. The paper specifically focuses on feature selection pipelines for linear models, which are composed of value imputation algorithms, outlier detection algorithms, and feature selection algorithms. The proposed statistical test builds upon the technique of selective inference, and it is theoretically shown that it can control the probability of false positive feature selection at any desired level. Experiments on synthetic and real data are provided to demonstrate the validity and effectiveness of the proposed statistical test.

## update after rebuttal ##

The rebuttal from the authors has supported my initial recommendation to accept the paper.

Claims And Evidence: The paper provides convincing theoretical and experimental results to support the validity and effectiveness of the proposed statistical test.

Methods And Evaluation Criteria: The paper uses both synthetic and real-world datasets to assess the performance of the statistical test. In addition, it considers three baselines: a version of the proposed test without parametric programming, a classical z-test, and the Bonferroni correction. In general, the proposed methods and evaluation criteria make sense for the problem at hand. Since the focus of the paper seems to be on feature selection pipelines for the moment, would it not make sense to compare with any of the baselines below?
- Knockoff-based feature selection
- Meinshausen, Nicolai, and Peter Bühlmann. "Stability selection." Journal of the Royal Statistical Society Series B: Statistical Methodology 72, no. 4 (2010): 417-473.

Theoretical Claims: The proofs of the theoretical claims in the paper seem correct.

Experimental Designs Or Analyses: Overall, the experimental design seems correct.
However, there are certain details that I am unclear about:
1) The probability of a missing value was set to 0.03, which seems quite low. Is it possible that selecting such a low value can somehow affect the performance of the statistical test?
2) I am not entirely sure I understood the real data experiments, and more specifically, why it is necessary to generate random datasets from each original dataset to illustrate the performance of the proposed test.
3) The real-world datasets considered have very few features (<15), so I am wondering how the proposed test will behave in settings with a larger number of features.

Supplementary Material: I reviewed all the parts of the supplementary material. I particularly appreciate the experiments on the computational time of the proposed method.

Relation To Broader Scientific Literature: The proposed statistical test builds upon the selective inference statistical technique to provide a new way of assessing the significance of data analysis pipelines. The specific focus is on feature selection pipelines for linear models.

Essential References Not Discussed: The paper presents a summary of the most related works in the selective inference and AutoML literature.

Other Strengths And Weaknesses:
Strengths:
(+) Well written paper that builds upon selective inference to assess the statistical significance of data analysis pipelines.
(+) Includes experiments with both synthetic and real-world data.
(+) The figures provided facilitate the understanding of the proposed approach.

Weaknesses:
(-) Some parts of the experimental study are unclear.

Other Comments Or Suggestions:
- Line 186: remove "is" from "Making the p-value is a random variable"
- Line 246: replace "can" with "are" in "Note that DAGs can.."
- Line 310: replace "currently" with "current"
- Line 312: replace "details" with "details"
- The sentence "Missing values .. of 0.03" is repeated twice in the Experimental Setup section.
- In Section 6, "Methods for Comparison", for op1 and op2, I believe that the reference should be to Figure 1, not Figure 2.

Questions For Authors: I have already stated these questions, but I restate them below:
1) The probability of a missing value was set to 0.03, which seems quite low. How does (or better, does) such a value affect the performance of the statistical test?
2) I am not entirely sure I understood the real data experiments, and more specifically, why it is necessary to generate random datasets from each original dataset to illustrate the performance of the proposed test.
3) The real-world datasets considered have very few features (<15), so I am wondering how the proposed test will behave in settings with a larger number of features.
4) Since the focus of the paper seems to be on feature selection pipelines for the moment, would it not make sense to compare with any of the baselines below?
- Knockoff-based feature selection
- Meinshausen, Nicolai, and Peter Bühlmann. "Stability selection." Journal of the Royal Statistical Society Series B: Statistical Methodology 72, no. 4 (2010): 417-473.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
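Of the baselines discussed in this review, the Bonferroni correction is simple enough to sketch in a few lines. This is a generic textbook implementation, not the paper's code:

```python
def bonferroni(p_values):
    # Bonferroni correction: scale each p-value by the number of tests,
    # capping at 1; controls the family-wise error rate but is
    # conservative, which is why it tends to lose power
    m = len(p_values)
    return [min(1.0, m * p) for p in p_values]

adjusted = bonferroni([0.01, 0.2, 0.5])
print(adjusted)  # roughly [0.03, 0.6, 1.0]
```

Its conservativeness is consistent with the paper's experiments, where the Bonferroni baseline shows very low power.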
Rebuttal 1: Rebuttal: We thank the reviewer for your feedback.

> The probability of a missing value was set to 0.03, which seems quite low. How (or better does) such value can affect the performance of the statistical test?

This probability was set only for experimental convenience. The validity of the proposed method, specifically its control of the Type I Error Rate, is unaffected by how missing values are present in the response vector. However, while a higher proportion of missing values naturally leads to a decrease in statistical power, the proposed method maintains its advantage over the comparison methods.

> I am not entirely sure I understood the real data experiments, and more specifically, why it is necessary to generate random datasets from each original dataset to illustrate the performance of the proposed test?

We need multiple datasets to evaluate the performances of statistical tests (type I error rate and power). Therefore, for experiments on real datasets, we created multiple datasets by randomly resampling the original dataset. Such an evaluation is a standard approach for assessing statistical tests on real datasets.

> The real-world datasets considered have very few features (<15), so I am wondering how the proposed test will behave in settings with larger number of features.

The validity of the proposed method and its superior statistical power compared to baseline methods remain consistent even as the number of features increases. Please refer to the results of the synthetic data experiments on high-dimensional data provided in the appendix (see Figure 5 in Appendix D).

> Since the focus of the paper seems to be on feature selection pipelines for the moment, would not it make sense to compare with any of the baselines below?
> - Knockoff-based feature selection
> - Meinshausen, Nicolai, and Peter Bühlmann. "Stability selection." Journal of the Royal Statistical Society Series B: Statistical Methodology 72, no. 4 (2010): 417-473.
Methods such as Knockoff-based feature selection and Stability Selection are designed to control the False Discovery Rate (FDR), defined as the expected proportion of falsely selected features within the selected set. In contrast, our proposed method performs hypothesis testing on individual features within a selected feature set. Because they address different inferential goals (set-level FDR vs. individual feature significance), the methods are not directly comparable.

---

Rebuttal Comment 1.1: Comment: Thank you for the responses provided to my questions. I have some follow-up clarifications:
1. I appreciate the reply regarding the probability of missing value. However, I argue that there are no results in the paper that suggest that the proposed method maintains its advantage over the comparison methods when this value is increased. Adding even a small experiment in the camera-ready version can strengthen the statement you provided.
2. As a future recommendation, a study that involves a large number of real-world datasets with a high number of features could be beneficial to support the importance of the particular statistical test.
3. My question regarding the low number of features was specifically for real-world datasets, not synthetic ones, since the former include variability in conditions not necessarily present in synthetic datasets.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for your further feedback.

> I appreciate the reply regarding the probability of missing value. However, I argue that there are no results in the paper that suggest that the proposed method maintains its advantage over the comparison methods when this value is increased. Adding even a small experiment in the camera-ready version can strengthen the statement you provided.

Thank you for your suggestion. Following the reviewer's recommendation, we conducted experiments comparing statistical power when varying the probability of missing values.
The results, presented in the table below, confirm the advantage of our proposed method even when the probability of missing values increases (the experimental setup is described later). We will add these results to the revised manuscript.

| Probability of Missing Values | 0.03 | 0.12 | 0.21 | 0.30 |
|---|---|---|---|---|
| proposed | **0.487** | **0.379** | **0.361** | **0.335** |
| w/o-pp | 0.064 | 0.055 | 0.052 | 0.057 |
| bonferroni | 0.000 | 0.000 | 0.000 | 0.000 |

> As a future recommendation, a study that involves a large number of real-world datasets with high number of features could be beneficial to support the importance of the particular statistical test.

> My question regarding the low number of features was specifically for real-world datasets, not synthetic ones, since the former ones include variability in conditions, not necessarily present in synthetic datasets.

Thank you for your suggestion; we had initially misinterpreted your feedback. We also plan to apply our testing framework to more practical high-dimensional data analysis tasks, such as gene expression data analysis, in the future.

---

* The experimental setup for the above table is as follows:
- We used the `cv` pipeline for the experiments.
- We set the number of samples $n$ to $200$, the number of features $d$ to $20$, and the true coefficient $\Delta$ to $0.4$.
- We varied the probability of missing values over $\\{0.03, 0.12, 0.21, 0.30\\}$.
- We generated 10,000 datasets in the same way as in the main experiments (Section 6).
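The evaluation protocol described in this exchange (generate many datasets, then estimate rejection rates) can be sketched generically. In this minimal sketch, uniform p-values stand in for a valid test under $H_0$; the uniform stand-in is a hypothetical placeholder, not the proposed method:

```python
import random

random.seed(0)
alpha = 0.05
n_datasets = 10000  # same order of magnitude as in the experiments

# under H0, a valid test yields Uniform(0, 1) p-values, so the fraction
# of p-values below alpha estimates the type I error rate
rejections = sum(random.random() < alpha for _ in range(n_datasets))
type1_rate = rejections / n_datasets
print(type1_rate)  # close to 0.05
```

In the paper's experiments, each iteration would instead resample a dataset, run the pipeline, and compute the selective p-value; power is estimated the same way under an alternative with nonzero true coefficients.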
Summary: The authors propose an extension of selective inference techniques from single procedures (lasso, marginal screening) to pipelines. They develop a statistical test that they claim to have better power for data analysis pipelines with multiple, data-adaptive decision points.

Claims And Evidence: The claims of the work are clearly stated and supported mostly by theorems. The data-based evidence relies on a few somewhat outdated datasets, but is not central to the contribution of the paper. The authors appropriately cast it as having somewhat limited applicability (linear models with normal errors, specific feature selection steps), but with an eye towards generalizing to autoML pipelines, which is a good goal.

Methods And Evaluation Criteria: Yes, the proposed methods make sense. The evaluation criteria are also sensible. I like that the authors focused on power, since a method like this may be prone to over-conditioning and power loss.

Theoretical Claims: I checked proofs 3.1, 3.2, and 4.1, although I could have missed some details. Each of them seems clear.
- 3.1 uses a standard selective inference trick to transform $Y$ and obtain the truncated normal distribution $T(Y)$
- 3.2 is similarly aligned with selective inference theory to establish the uniformity of the p-value
- 4.1 uses some parametric programming arguments to define when selected features and detected outliers remain unchanged (i.e. when the update rules do not trigger a change)

The proofs appear to correctly apply techniques seen in related works (Lee et al. 2016, Tibshirani et al. 2016).

Experimental Designs Or Analyses: As noted above, the only caveat with the experiments is that the datasets are somewhat outdated. It is kind of an ancillary point to the main value of the paper.

Supplementary Material: I looked at the proofs in the supplement.
Relation To Broader Scientific Literature: The paper is related to the selective inference literature, and extends it from single procedures to pipelines. The paper seems clear about its contribution in this regard.

Essential References Not Discussed: None noted.

Other Strengths And Weaknesses: The paper is clearly written and well reasoned. The value proposition is clear. To me it appears to be a good extension of prior work. The main weaknesses I see are:
1. It is not clear how well this procedure could truly generalize, if at all, to more complicated ML models or pipelines. The strong assumptions that are made about error normality and a fixed design matrix seem to be quite central to the validity of the statistic.
2. Line search is computationally expensive, as evidenced in Appendix Fig. 6, and is already quite slow for these simple datasets.

Other Comments Or Suggestions: None

Questions For Authors: Could the authors say more about how this prototype is expected to be extendable to more complicated pipelines and datasets?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
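The truncated-normal construction mentioned in this review's sketch of proof 3.1 can be illustrated with a minimal computation. The interval endpoints below are hypothetical placeholders for the truncation induced by the selection event; only the general recipe (condition the normal test statistic on a truncation interval) follows the selective inference literature:

```python
import math

def normal_cdf(x, sigma=1.0):
    # N(0, sigma^2) CDF via the error function
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def selective_p_value(t, sigma, lower, upper):
    # one-sided p-value of a statistic t ~ N(0, sigma^2) conditioned
    # (truncated) to the selection interval [lower, upper]
    Fl = normal_cdf(lower, sigma)
    Fu = normal_cdf(upper, sigma)
    Ft = normal_cdf(t, sigma)
    return (Fu - Ft) / (Fu - Fl)

# with an essentially unbounded interval, the selective p-value reduces
# to the ordinary one-sided z-test p-value
p_sel = selective_p_value(1.5, 1.0, -10.0, 10.0)
p_naive = 1.0 - normal_cdf(1.5)
assert abs(p_sel - p_naive) < 1e-6

# a tight truncation interval changes the p-value substantially
print(selective_p_value(1.5, 1.0, 1.0, 4.0))
```

This is exactly why the naive z-test is invalid after selection: ignoring the truncation interval distorts the null distribution of the p-value.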
Rebuttal 1: Rebuttal: We thank the reviewer for your feedback.

> It is not clear how well this procedure could truly generalize, if at all, to more complicated ML models or pipelines. The strong assumptions that are made about error normality and a fixed design matrix seem to be quite central to the validity of the statistic.

> Could the authors say more about how this prototype is expected to be extendable to more complicated pipelines and datasets?

Conditional selective inference was initially developed targeting the problem of feature selection in linear models, but it has recently been applied to various problems such as clustering and anomaly detection. In this paper, as a proof of concept, we consider a feature selection pipeline for linear models, but we believe that the same concept can be applied to a wide range of other problems as well. All the analysis components considered in this paper are those for which the selection events can be characterized by sets of linear or quadratic inequalities. In addition to the nine specific analysis components in the paper, there are many other analysis components whose selection events can be characterized within this framework. Technically, it is easy to incorporate them as additional components of our current pipeline framework.

> Line search is computationally expensive, as evidenced in the appendix fig 6, and for these simple datasets is already quite slow.

Although the line search is computationally expensive, we expect that parallelization offers potential for significant time reduction. The vertical axis in Figure 6 shows the computation time needed for a single hypothesis test (i.e., to calculate one p-value) on a single CPU core. Our preliminary experiments indicate that using 16 cores reduces this time to roughly one-fifth. We will add an explanation of this in the revised manuscript.
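A minimal sketch of the "linear inequalities" characterization mentioned in the rebuttal: each pipeline component contributes inequalities $a_i z \le b_i$ in a scalar line-search parameter $z$, and intersecting them yields the truncation range over which the pipeline's output is unchanged. The coefficients below are toy values, purely illustrative:

```python
import math

def truncation_interval(a, b):
    # intersect the inequalities a_i * z <= b_i over a scalar z
    lo, hi = -math.inf, math.inf
    for ai, bi in zip(a, b):
        if ai > 0:
            hi = min(hi, bi / ai)
        elif ai < 0:
            lo = max(lo, bi / ai)
        elif bi < 0:
            return None  # 0 * z <= bi is infeasible when bi < 0
    return lo, hi

# toy selection event: z <= 3, -z <= 1, 2z <= 10  ->  z in [-1, 3]
print(truncation_interval([1.0, -1.0, 2.0], [3.0, 1.0, 10.0]))  # (-1.0, 3.0)
```

In the actual method the inequalities come from imputation, outlier detection, and feature selection steps stacked along the pipeline DAG, and parametric programming enumerates these intervals along the whole line.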
Summary: This paper presents a statistical testing framework based on selective inference (SI) for assessing the significance of features selected through full feature selection pipelines. These pipelines may include steps such as missing value imputation, outlier detection, and feature selection. The key idea is to compute valid p-values by conditioning on the fact that the features were selected through a specific sequence of data-dependent operations. The proposed framework is modular and can accommodate a range of commonly used pipeline components. Empirical results on both synthetic and real datasets show that the method effectively controls Type I error and reduces false discoveries compared to standard, naive testing approaches—particularly when selection is driven by the data. Claims And Evidence: Yes. The claims are supported by both theoretical arguments and empirical evidence. The experimental section demonstrates that naive approaches can yield misleading significance results, while the proposed method performs consistently under various setups. Methods And Evaluation Criteria: Yes. The methods and evaluation criteria are appropriate for the problem. The use of synthetic data for controlled studies and real datasets for practical illustration is well-justified. Theoretical Claims: I had a quick look at the proof of Theorem 3.1, which characterizes the distribution of the test statistic under the selective inference framework. The derivation appears correct and follows known techniques in selective inference literature. The modular treatment of pipeline components is a nice extension of this theory. Experimental Designs Or Analyses: I have carefully looked at the experimental section in the main part of the paper. The design is clear, comparisons are appropriate, and the analysis effectively demonstrates the strengths of the proposed method. 
Supplementary Material: I reviewed some of the theoretical content, particularly Theorem 3.1, as well as the additional experimental results in Appendix D. The theorem is clearly presented and builds on established selective inference techniques, adapting them thoughtfully to the setting of pipeline-based feature selection. Appendix D provides further empirical evidence supporting the paper’s main claims, including evaluations on multiple datasets and a discussion of computational overhead. These results are consistent with the claims made in the main text and further demonstrate that the proposed method maintains statistical validity while successfully identifying meaningful features. Relation To Broader Scientific Literature: This paper extends recent work in post-selection inference to the setting of feature selection pipelines, where rigorous statistical testing is often lacking. Its main contribution is the development of a framework that supports modular pipelines—composed of multiple data-dependent preprocessing and selection steps—while still enabling valid p-value computation for selected features. This represents a meaningful advance beyond prior work focused on single-step selection procedures. Essential References Not Discussed: None identified. Other Strengths And Weaknesses: Strengths: 1. The application of selective inference to this setting is a compelling and timely contribution. 2. The framework appears practical, modular, and generalizable to many pipelines used in practice. 3. The paper is well written, and the problem motivation is very clear. Weaknesses: 1. Figures (especially Figure 3) could be improved to make differences between methods more visually accessible. Other Comments Or Suggestions: Line 110: "AD – anomaly detection?" → consider changing to OD - "outlier detection" for consistency with terminology used elsewhere. Questions For Authors: 1. 
Figure 3: It is difficult to distinguish between the performance of various methods, as many lines are overlapping. Consider using different linestyles, colors, or markers to more clearly show the differences. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. > Line 110: "AD – anomaly detection?" → consider changing to OD - "outlier detection" for consistency with terminology used elsewhere. Thank you for pointing this out. We will change the term to OD in the revised manuscript. > Figure 3: It is difficult to distinguish between the performance of various methods, as many lines are overlapping. Consider using different linestyles, colors, or markers to more clearly show the differences. We will improve the figure to make it easier to distinguish between the different methods in the revised manuscript.
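The selection-bias phenomenon discussed throughout the reviews above (naive testing after data-dependent selection inflates Type I error) is easy to reproduce in a few lines. The following simulation is ours and illustrative only, not the paper's procedure: under a global null, picking the feature most correlated with the response and then running a naive two-sided test on it rejects far more often than the nominal 5% level.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n, d, level, reps = 100, 20, 0.05, 1000
naive_rejections = 0
for _ in range(reps):
    X = rng.standard_normal((n, d))
    y = rng.standard_normal(n)              # global null: no feature is related to y
    j = int(np.argmax(np.abs(X.T @ y)))     # data-driven selection step
    xj = X[:, j]
    beta = xj @ y / (xj @ xj)               # OLS fit on the *selected* feature
    resid = y - beta * xj
    se = math.sqrt(resid @ resid / (n - 1)) / math.sqrt(xj @ xj)
    t_stat = beta / se
    # naive two-sided p-value (normal approximation), ignoring the selection event
    p = math.erfc(abs(t_stat) / math.sqrt(2))
    naive_rejections += p < level
rate = naive_rejections / reps
print(f"naive rejection rate under the null: {rate:.2f} (nominal: {level})")
```

Because the same data drive both selection and testing, the rejection rate is roughly that of the minimum of 20 p-values, far above 5%; conditioning on the selection event, as in selective inference, is what restores validity.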
Sampling Binary Data by Denoising through Score Functions
Accept (poster)
Summary: This paper proposes a generative model for data on the Boolean hypercube. The basic idea is to "noise" the data with random sign flips, with the expected number of sign flips controlled by a parameter $\alpha$, and then learn to "reverse" this operation, which ultimately boils down to learning a denoiser in this paper. The denoiser takes as input a sample from the sign-flipped data distribution which is denoted by $q_\alpha$. The overall sampling procedure proceeds as follows: first sample $Y \sim q_\alpha$, then query the denoiser at $Y$. This sampling procedure introduces a specific compromise: as $\alpha$ is close to $0$, $q_\alpha$ becomes closer to the uniform distribution and so is easier to sample, but then the law of ${denoiser}_{\alpha} (Y)$ can be very different from the target distribution. A workaround is to include multiple i.i.d. measurements $Y_1, ..., Y_m$. In this case the denoiser takes as input these measurements altogether and its denoising performance, as measured by the closeness of the law of $denoiser_\alpha(Y_1, ..., Y_m)$ to the target distribution, improves. Having trained the denoiser, sampling is performed by drawing approximately from the joint marginal distribution of $Y_1, ..., Y_m$. This is done by first sampling $Y_1 \sim q_\alpha$ and then the consecutive conditionals $Y_k | Y_{1:k-1}$. Crucially, we can take $\alpha$ to be small in this case so as to ensure that the distribution is easy to sample while at the same time having improved denoiser performance due to the multiple measurements. Since we are dealing with a discrete space, the authors proceed to develop a Gibbs sampler for the initial distribution. Claims And Evidence: The paper makes a few interesting theoretical claims about the method which help the reader gain intuition about how the proposed sampler behaves. In terms of evidence, most of them are addressed with theoretical results.
One point which I believe is not substantiated with some evidence is the sampling of the conditional distributions. It seems that this is not discussed and is assumed to be rather easy to solve? Methods And Evaluation Criteria: In my opinion the methodology lacks motivation. For example, the popular denoising diffusion models proceed by reversing a forward Markov chain that turns the data into noise. The initial distribution of this reverse process can be sampled almost exactly and then each transition in the backward process can be sampled quite accurately as long as we have a well-trained denoiser. This paper takes an orthogonal approach by instead relying on multiple measurements that are at the same "noise level". Due to this design choice, the initial distribution is not trivial to sample from and requires quite some machinery. The authors derive (partial) theoretical guarantees for the sampler they use for this initial distribution but still, the distance of the stationary distribution of their sampler to that of the target is not necessarily small. Furthermore, I believe that no guarantees are given for the conditional distributions that need to be sampled afterwards. Essentially, I do not think that the paper gives compelling arguments as to why this is needed and what is its advantage compared to more traditional approaches. For example one could think about designing a forward Markov chain of sign flips, and this would converge to the uniform distribution on the hypercube exponentially fast. This Markov chain can then be reversed and having learned the "denoiser", I believe that the sampling could be done straightforwardly. This is essentially the analog of the approaches considered in the Masked Diffusion literature [1]. I would really like to understand in which cases this approach is more appealing. [1] Sahoo, S., Arriola, M., Schiff, Y., Gokaslan, A., Marroquin, E., Chiu, J., Rush, A. and Kuleshov, V., 2024. 
Simple and effective masked diffusion language models. Advances in Neural Information Processing Systems, 37, pp.130136-130184. [2] Shi, J., Han, K., Wang, Z., Doucet, A. and Titsias, M., 2024. Simplified and generalized masked diffusion for discrete data. Advances in neural information processing systems, 37, pp.103131-103167. Theoretical Claims: The proofs are, as far as I am concerned, correct. Experimental Designs Or Analyses: The experiments are sound; there is a toy experiment in which everything can be computed in closed form. It allows the reader to get a good grasp of how the various hyperparameters in the paper influence the final performance. The qualitative analysis of this experiment is interesting. The algorithm is then tested on a simple image experiment. Here again the qualitative analysis of the hyperparameters is sound and interesting. It seems to me that this section lacks a comparison with existing methods that showcases its appeal. Supplementary Material: I have checked the proofs in the supplementary material. Relation To Broader Scientific Literature: The paper builds on ideas present in the paper [1]. Still, there are significant differences in terms of scope (here discrete state space) and in the overall methodology. The paper is also related to recent works on discrete diffusion which are cited in the paper. I believe that the contribution in this paper is novel w.r.t. these prior works. However, besides emphasizing the difference in methodology with these works, the paper does not discuss the main practical interest of their method vs these prior works. [1] Saremi, S., Park, J.W. and Bach, F., 2023. Chain of log-concave Markov chains. arXiv preprint arXiv:2305.19473. Essential References Not Discussed: While not essential, the authors should also discuss the recent works [1, 2]. [1] Sahoo, S., Arriola, M., Schiff, Y., Gokaslan, A., Marroquin, E., Chiu, J., Rush, A. and Kuleshov, V., 2024. Simple and effective masked diffusion language models. 
Advances in Neural Information Processing Systems, 37, pp.130136-130184. [2] Shi, J., Han, K., Wang, Z., Doucet, A. and Titsias, M., 2024. Simplified and generalized masked diffusion for discrete data. Advances in neural information processing systems, 37, pp.103131-103167. Other Strengths And Weaknesses: **Strength**: I would like to emphasize that the paper presents interesting ideas and the theoretical results provided are nice to have. **Weaknesses**: The paper could be challenging to follow for readers less familiar with sampling methods. For instance, providing a clear algorithm would help readers better understand and implement the proposed method. The paper also lacks explanations and discussions, and is poorly written at times. Other Comments Or Suggestions: I have no further comments or suggestions besides what is mentioned above. Questions For Authors: - Can you give explicit details about how you sample the conditional distributions? Do you use the discrete Langevin sampler? - What happens in practice when you set $\alpha$ to be very close to $0$ and use a very large number of measurements? - Is it useful for the two-stage sampler to start with a large $\eta$ and then slowly decrease it? - Once you decrease $\alpha$ significantly and then scale $m$ accordingly so that the denoiser performance is decent, the initial distribution of $(Y_1, \dotsc, Y_m)$ should be very close to uniform. In this case, the 2-stage sampler would no longer be needed. Can the authors discuss this setting? Code Of Conduct: Affirmed. Overall Recommendation: 2
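As a side note on the mechanics discussed in this review, the sign-flip channel and the benefit of multiple measurements can be sketched in a toy setting. The snippet below is ours and not the paper's learned denoiser: for independent, uniformly distributed ±1 coordinates, the Hamming-optimal denoiser given m i.i.d. noisy measurements reduces to a coordinate-wise majority vote, and the bit-error rate shrinks as m grows.

```python
import numpy as np

rng = np.random.default_rng(1)
d, flip_p = 10_000, 0.3                          # dimension, per-bit flip probability
x = rng.choice([-1, 1], size=d)                  # clean binary sample

def measure(x, flip_p, m, rng):
    """m independent measurements: each coordinate's sign flips with probability flip_p."""
    flips = rng.random((m, x.size)) < flip_p
    return np.where(flips, -x, x)

errors = {}
for m in (1, 5, 25):                             # odd m avoids ties in the vote
    Y = measure(x, flip_p, m, rng)
    x_hat = np.sign(Y.sum(axis=0))               # majority vote over measurements
    errors[m] = float(np.mean(x_hat != x))
print(errors)                                    # bit-error rate drops as m grows
```

With flip probability 0.3, a single measurement leaves roughly 30% of the bits wrong, while aggregating 25 measurements drives the error below a few percent, which is the intuition behind trading a small $\alpha$ against a larger $m$.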
Rebuttal 1: Rebuttal: Thank you for your review. Could the reviewer comment on "the authors proceed to develop a Gibbs sampler for the initial distribution"? The algorithm we have studied is not a Gibbs sampler. Regarding "In my opinion, the methodology lacks motivation", our motivation is that in many applications, a single noise level is enough, which in addition can be extended by adding a single hyperparameter (the number of measurements) to improve sample quality. Regarding "Essential References Not Discussed", we'd be happy to add the references mentioned by the reviewer. Regarding the comment that the paper "is poorly written at times", could the reviewer comment on where it is poorly written? We'd be happy to improve the presentation. Regarding the question on sampling from the conditional distributions, this is explained below Eq. 4. The conditional distributions also satisfy the assumptions (4), which essentially follow from the identities in Section 2. This is also quite intuitive, as sampling from conditional distributions should not be harder than sampling the first noisy measurement. Given this question by the reviewer, we will clarify this further in our revision. Regarding "Questions For Authors": - Yes for sampling from the measurement-conditioned distributions, we can use the discrete Langevin sampler. - Changing the step size is a good suggestion for future work. Here, we took a simple approach, choosing a fixed step size, which we also did not tune. The step size is simply set to $1/\alpha$. - This is an interesting regime. However, the tradeoff is that for small $\alpha$ sampling each measurement becomes easier but then one has to wait longer in measurement accumulation to have good denoised samples. For some problems even a single measurement may be enough, but then one shouldn't use a very small $\alpha$. We do not believe there is a general answer on which strategy is better as this is highly problem dependent. 
**GENERAL RESPONSE**: We would like to thank all the reviewers for their detailed feedback, highlighting the novelty and simplicity of our framework and providing pointers to the literature on discrete diffusion. We address the common concerns here. Three reviewers brought up references on discrete diffusion that we're happy to add to our literature review. We'd also be happy to extend our discussion of the diffusion approach vs. ours (non-SDE, single noise level) in the related work section. The closest reference to our paper is the concurrent paper by Le Tuyet et al., Discrete Markov Probabilistic Models (posted first on arXiv after the ICML deadline). We're happy to discuss this concurrent work as well. In short, their method is very different from ours, as it involves a "continuous-time Markov chain." Several reviewers commented on the need for pseudocode and schematics. Thank you for this constructive feedback. We will add pseudocode and a schematic in our revision. Regarding novelty, please note that there are no known results on the mixing time of discrete Langevin MCMC, and our convergence results also improve upon the known results [1]. In their paper, they assume the target distribution is quadratic (Eq. 5 in [1]), which we relax substantially with our assumption (4) in our paper. [1] Zhang et al. (2022) A Langevin-like Sampler for Discrete Distributions --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response. I am not sure I understand what is wrong with the claim that the algorithm used to sample from the initial distribution is an (approximate) Gibbs sampler. The joint density $q(y, z)$ has $q$ as marginal wrt $y$ and the proposed algorithm proceeds by iteratively sampling each conditional, with one step (the conditional $y|z$) implemented approximately. 
> the methodology lacks motivation Overall, the steps of the algorithm are: first use a two-stage sampler for the initial distribution, then use discrete Langevin steps to sample from each conditional distribution, then denoise. It feels comparable to the initial paper on Diffusion models [1] where the authors use Langevin dynamics to sample from the transitions because it was not known back then that we could come up with simpler and more principled updates that do not require significant parameter tuning, and obtain much better performance on top of that. The fact that the method in the present paper has to resort to a two-stage sampler for the initial distribution and discrete Langevin for the conditionals feels like a setback and I am still not convinced why one would use this method and not simply define a forward Markov chain of sign flips and then reverse the process. The initial distribution and the transitions would be much simpler to sample. I agree that one noise level may be enough in some applications and this would be appealing if the initial distribution is easy to sample. But this is not the case and so the solution is to add many more measurements at the same noise level. You then end up with a sequence of transitions that you need to sample. It is thus not clear that your method is more compute-efficient than simply reversing a Markov chain. Given these concerns, I maintain my score. [1] Song, Y. and Ermon, S., 2019. Generative modeling by estimating gradients of the data distribution. --- Reply to Comment 1.1.1: Comment: We appreciate your continued feedback. Indeed, we add a Gibbs sampler in the two-stage case, but sampling in {-1,1}^d is not done by coordinate-wise steps; all coordinates are updated simultaneously. We can make this clearer in the revision. Please also note that in the vanilla (single-stage) case, there is only one set of coordinates, and all are updated simultaneously using the score function. 
Our paper analyzes both single-stage and two-stage samplers. The setting in our work is very different from that in the paper the reviewer cited. Here, we are interested in sampling from a distribution on the Boolean hypercube, specifically in a setting where Gaussian noise is not allowed. We already have a discussion of the diffusion literature in the Gaussian case in the introduction, but we would be happy to extend that discussion, as we stated in our general response. The literature on discrete Langevin MCMC is very new, and as we have emphasized, there is simply no convergence analysis in the general case. Our paper addresses this with Propositions 3.1 and 3.3. In addition, we prove results regarding the mixing time of the discrete Langevin algorithms (Propositions 3.2 and 3.4). We believe that none of these results are related to the paper the reviewer cited. The paper most relevant to our work is the concurrent SDE-based work by Le Tuyet et al., which was cited by Reviewer Fbys. Compared to the concurrent work, we can make much bigger steps and require fewer score functions to learn (one with a single noise level, and only $m$ in the general case). Regarding models with a single noise level, we would also like to re-emphasize that there are cases where a single noise level is capable of achieving state-of-the-art results. We have already cited three such published papers in the introduction (by Pinheiro et al., Frey et al., and Kirchmeyer et al.). We'd be happy to extend that discussion and add more references.
Summary: This paper investigates the problem of sampling from distributions over the binary hypercube using a smoothing-denoising framework. Instead of adding Gaussian noise, the authors introduce a novel approach based on Bernoulli noise. To enhance convergence in the denoising step, they leverage proximal sampling methods. The effectiveness of their method is demonstrated through experiments on synthetic data and binarized images. Claims And Evidence: The claims are generally well-supported by proofs and empirical experiments. Methods And Evaluation Criteria: The proposed methods make a significant contribution to sampling from Boolean distributions. While smoothing discrete distributions using Gaussian noise has been extensively studied, the use of Bernoulli noise appears to be a novel and promising approach. A key advantage of Bernoulli noise is its potential to improve mixing time. However, I am uncertain whether a similar convergence rate could be achieved with Gaussian noise when combined with a proximal sampler. Additionally, while I am less familiar with the experimental aspects, I would be interested in understanding the feasibility of applying this method to high-dimensional settings. Theoretical Claims: I checked the proofs and didn't find problems. Experimental Designs Or Analyses: I am less familiar with the experimental aspects, so I only conducted a high-level review. Based on my assessment, the experimental design appears reasonable and generally supportive of the claims. Supplementary Material: I have reviewed the proof at a high level and did not identify any issues. Relation To Broader Scientific Literature: I don't know. Essential References Not Discussed: I did not find any essential related works that are missing. 
Other Strengths And Weaknesses: Strengths: - this paper is well written; - the use of Bernoulli noise instead of Gaussian noise is new for me; Other Comments Or Suggestions: - Line 161 "is equal is" -> "is equal to" Questions For Authors: - In Section 4, can Gaussian noise with proximal sampling achieve a comparable convergence rate? If that is the case, the advantage of using Bernoulli noise might be limited. - Is it possible to derive convergence results under other metrics, such as total variation (TV) distance? - Could you provide a justification for the assumptions made in Section 3? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your review. Regarding "Other Comments Or Suggestions": - Thank you, we'll correct the typo. Regarding "Questions For Authors": - Thank you for the question. Intuitively, kinetic Langevin algorithms or Hamiltonian Monte Carlo also have an auxiliary variable ("velocity" in their case). However, making a deeper connection is a great research direction. - We believe more assumptions need to be made to extend our results to TV. - The assumptions made in Sec. 3 are akin to regularity assumptions (very common in the optimization literature) in the Euclidean case, which are adopted here for the binary case. In addition, they were required in our proofs in Sec. 3. We'd be happy to clarify this further in the paper. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed response. Overall, I find this to be an interesting piece of work. I just wanted to follow up with a question regarding the two-stage sampler: what are the main challenges in applying a Gibbs-type sampler to the Gaussian smoothing case discussed in Section 4? --- Reply to Comment 1.1.1: Comment: We're glad you found our work interesting. We are not entirely sure what the question is referring to. In the Gaussian case, the most natural "Gibbs-type" algorithm that extends the unadjusted Langevin algorithm is the kinetic Langevin MCMC, also known as underdamped Langevin MCMC. In that case, one augments the distribution by adding an auxiliary ("velocity") random variable and alternates between updating the velocity and then the coordinates (see Eq. 1 in [1]). This is conceptually similar to the two-stage sampler we use, but technically these algorithms are very distinct. In particular, there is a lot of freedom in discretizing the kinetic Langevin diffusion, which is a very active area of research. The introduction of [2] contains a good literature review on this topic. One could, in principle, consider a case where the auxiliary variables are discrete. This is an interesting research direction, if the reviewer has that setting in mind. [1] Cheng, X., Chatterji, N. S., Bartlett, P. L., & Jordan, M. I. (2018). Underdamped Langevin MCMC: A non-asymptotic analysis. In Conference on learning theory. [2] Mou, W., Ma, Y. A., Wainwright, M. J., Bartlett, P. L., & Jordan, M. I. (2021). High-order Langevin diffusion yields an accelerated MCMC algorithm. Journal of Machine Learning Research, 22(42), 1–41.
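The alternation the authors describe in their reply (update an auxiliary "velocity" variable, then the coordinates) can be sketched in the continuous Gaussian case. The snippet below is a generic Euler-type discretization of underdamped (kinetic) Langevin dynamics in the spirit of Cheng et al. [1], with illustrative parameter values of our choosing; it is not the paper's discrete two-stage sampler.

```python
import numpy as np

rng = np.random.default_rng(2)
gamma, h, steps, burn = 2.0, 0.05, 20_000, 5_000
grad_U = lambda x: x                  # target exp(-U) with U(x) = x^2 / 2, i.e. N(0, 1)
x, v = 0.0, 0.0
samples = np.empty(steps)
for t in range(steps):
    # stage 1: update the auxiliary "velocity" variable
    v += -gamma * v * h - grad_U(x) * h + np.sqrt(2.0 * gamma * h) * rng.standard_normal()
    # stage 2: move the coordinate using the updated velocity
    x += v * h
    samples[t] = x
xs = samples[burn:]
print(xs.mean(), xs.var())            # empirical mean near 0, variance near 1
```

The two-block structure (auxiliary update, then coordinate update) is the conceptual analogue of the two-stage sampler; as the reply notes, many other discretizations of the same diffusion are possible.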
Summary: The authors propose a method to sample binary (vector) data by denoising through score functions. They first propose a noise model for binary data where the noise corresponds to a bit flipping with a given probability. Then, they construct the optimal denoiser based on Hamming loss and they show that it corresponds to a sign of a scaled score function (or a conditional expectation of clean data given the noise). Furthermore, they demonstrate better denoising performance provided that more noisy samples (of a fixed clean sample) are available. After that, they construct a sampling algorithm based on a two-stage discrete Langevin sampler (which employs Gibbs sampling), which allows them to produce more "noisy observations" of a fixed clean sample. The authors complete their study with experiments in the case of binarized MNIST. Claims And Evidence: * Claim: A model for noise for binary vectors based on flipping the bits with given probability. Evidence: mathematical derivations * Claim: Optimal denoiser for this noise model given a Hamming loss which is given by a sign of a conditional expectation of clean data given a noise sample. Evidence: mathematical derivations and proofs * Claim: Connection of a conditional expectation of clean data given a noise sample, to a score function of a noise model. Evidence: mathematical derivations. * Claim: Approach to learn this score function via logistic regression. Evidence: Mathematical derivations. * Claim: Approach to use multiple noisy samples for one fixed clean sample, which allows one to better approximate the distribution of clean samples. Evidence: lemma * Claim: Approach to use the discrete Langevin algorithm to sample many noisy samples from a noise model. Evidence: derivations + proofs * Claim: The proposed approach has better mixing times than Gaussian-based Langevin. 
Evidence: Derivations and proofs * Claim: Empirical evidence that the proposed approach leads to a good approximation of a target distribution depending on the noise levels. Evidence: experiments. * Claim: Empirical evidence that the approach works with Binarized MNIST. Evidence: Experiments. I would like to challenge the claim that the approach of [1] is the first/only one to consider a generative model approach which does not rely on an SDE, as suggested by the end of the introduction "The main conceptual difference is that the multi-measurement sampling does not involve discretizing an SDE". The original formulation of DDPM, see [2,3] for instance, also does not include any SDE discretisation. Similarly, the fact that only one noise level is required is not a specificity of the given model. Such approaches are common in the image processing literature with the Plug-and-Play approaches [4]. Similarly, I think the claim that all approaches that build on diffusion models require a forward-backward formulation (in the discrete setting as emphasized in Section 1.2) is a bit misleading. For example [5] is not based on an SDE perspective (I concede that they use multiple and different noise levels). In particular, there is no discretization approximation in this work. [1] Saremi et al. (2024) Chain of log-concave Markov chains [2] Sohl-Dickstein et al. (2015) Deep Unsupervised Learning using Nonequilibrium Thermodynamics [3] Ho et al. (2020) Denoising Diffusion Probabilistic Models [4] Romano et al. (2016) The Little Engine that Could: Regularization by Denoising (RED) [5] Austin et al. (2020) Structured Denoising Diffusion Models in Discrete State-Spaces Methods And Evaluation Criteria: Most of the paper provides mathematical justifications together with error bounds for all the claims. Moreover, it provides empirical evidence highlighting all the theoretical claims made in the paper. The paper is mostly theoretical. 
The experimental setup of the paper is however quite weak and does not emphasize the scalability of the method. Theoretical Claims: I skimmed through the appendix and checked most of the proofs. Looks correct. Most of the theoretical claims have already been established in the literature. The main results of the paper in my opinion are Proposition 3.1 and Proposition 3.2. Experimental Designs Or Analyses: Experiments are quite simple and straightforward, no concerns for validity. Supplementary Material: I looked into the supplementary material; it mainly contains the proofs to the claims in the paper. Relation To Broader Scientific Literature: This paper takes a very different approach to discrete modeling compared to the masked diffusion approach and discrete-from-continuous approaches. Essential References Not Discussed: I think a few recent references are missing. Notably, there have been some great advances in the direction of discrete diffusion such as [1,2] and the references therein. I think it would be worth emphasizing the connections between the current approach and these works. [1] Shi et al. (2024) Simplified and Generalized Masked Diffusion for Discrete Data [2] Zhao et al. (2024) Unified Discrete Diffusion for Categorical Data Other Strengths And Weaknesses: I think the paper is well written, all the claims are theoretically justified. The method is a straightforward extension of [1] so the novelty is limited. However, I appreciate the current presentation. The experiments are quite weak. [1] Saremi et al. (2024) Chain of log-concave Markov chains Other Comments Or Suggestions: I suggest the authors add a discussion on how they see this work scaling to more complex discrete distributions (i.e., even just higher dimensional colored images?). What if the data is not binary, how would the authors approach it? I appreciate that the authors added a discussion that this approach might be applicable to a broader set of exponential family distributions. 
It would be interesting to have a slightly more concrete discussion for a path forward. Questions For Authors: Motivation: the motivation as to why only one level of noise is considered (even in the multisample case) is a bit unclear to me. One could say that we avoid the discretisation of the SDE but there is still some discrepancy between the target measure and the approximated one. Overall, I think that the authors should be able to answer the following question: why would a user pick this method over discrete diffusion or autoregressive models? Scalability: How would it scale for non-binary discrete data and to high-dimensional colored images? The current experimental setup does not bring a lot of information regarding the efficiency of the method. Baseline: no baseline is presented in this work. As it is presented as an alternative to discrete diffusion models it would be interesting to compare the performance in this case. Ethical Review Concerns: No concerns Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review. Please note that in the original DDPM formulation, there is indeed an SDE behind the scenes. In other words, to understand the theoretical properties of DDPM one has to return to the SDE formulation. The case of measurement accumulation is discrete by nature. Algorithmically, the methods differ in that, in the DDPM case, one needs to come up with a forward and backward "noise schedule," while in the case of measurement accumulation one only has to pick the number of measurements. Regarding the comment on denoising based on "one noise level," we're aware of the extensive research on denoising in the computer vision and signal processing literature, but turning that into a sampling and generative modeling scheme is not as well studied. For example, the reference Romano et al. (2016) is not a generative model -- there is no connection between the score function and the denoiser in the paper. We're not sure what the reviewer means by "Most of the theoretical claims have already been established in the literature". The results in Sec. 2 are straightforward generalizations of the results for the Gaussian case. Conceptually, the emergence of a smooth score function that's defined beyond the Boolean hypercube is new. All the results in Sec. 3 are new -- in particular, there is no mixing time result for discrete Langevin MCMC in the literature. We'd be happy to clarify this further in our revisions. Regarding "few recent references are missing", we'd be happy to add the references mentioned by the reviewer in our revisions. We disagree that this paper is a "straightforward extension of [1]". The machinery for discrete sampling and its theoretical properties have little in common with the Gaussian case. But conceptually, we also consider it an extension of the previous approach, and in our view that is a strength of the method, since we arrive at an algorithm with only two hyperparameters ($\alpha$ and $m$).
[1] Saremi et al. (2024) Chain of log-concave Markov chains Regarding "Other Comments Or Suggestions": - We have an extension of the approach for non-binary data, but it was beyond the scope of the paper due to space constraints. One simple approach would be to use the one-hot encoding for any discrete data, which turns it into binary data. Regarding "Questions For Authors": - Regarding "why would a user pick this method over discrete diffusion", the choice really comes down to the fact that, for some applications, single-noise-level models give rise to strong empirical results. Please see the references in the introduction. The models we have proposed in our paper are much simpler in terms of hyperparameters (and arguably in terms of formalism) compared to discrete diffusion. Please also note that we did not do any tuning for our experiments - the step size was simply set to $1/\alpha$. - Our paper is mainly theoretical and the experiments were designed to understand the method. There are extensive comparisons between different regimes in our experiments. The experiments on MNIST are qualitative to show the fast mixing of our algorithm. For this, we encourage the reviewer to compare our results with the concurrent work by Le Tuyet et al., as mentioned by the reviewer Fbys - in our case only one or two steps are required to arrive at a digit. Also, we don't know how to adapt the diffusion approach to produce the single (fast-mixing) MCMC chains we demonstrated in our experiments. **GENERAL RESPONSE**: We would like to thank all the reviewers for their detailed feedback, highlighting the novelty and simplicity of our framework and providing pointers to the literature on discrete diffusion. We address the common concerns here. Three reviewers brought up references on discrete diffusion that we're happy to add to our literature review. We'd also be happy to extend our discussion of the diffusion approach vs. ours (non-SDE, single noise level) in the related work section.
The closest reference to our paper is the concurrent paper by Le Tuyet et al., Discrete Markov Probabilistic Models (posted first on arXiv after the ICML deadline). We're happy to discuss this concurrent work as well. In short, their method is very different from ours, as it involves a "continuous-time Markov chain." Several reviewers commented on the need for pseudocode and schematics. Thank you for this constructive feedback. We will add pseudocode and a schematic in our revision. Regarding novelty, please note that there are no known results on the mixing time of discrete Langevin MCMC, and our convergence results also improve upon the known results [1]. In their paper, they assume the target distribution is quadratic (Eq. 5 in [1]), which we relax substantially with our assumption (4) in our paper. [1] Zhang et al. (2022) A Langevin-like Sampler for Discrete Distributions
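The one-hot trick mentioned in the rebuttal above (turning any categorical data into binary data so that machinery defined on the Boolean hypercube applies directly) can be sketched in a few lines. This is an illustrative sketch only; the function name is ours, not the paper's:

```python
import numpy as np

def one_hot_binarize(x, num_categories):
    """Map a categorical vector in {0, ..., K-1}^d to a binary vector
    in {0, 1}^(d*K) via one-hot encoding, so that methods defined on
    the Boolean hypercube apply directly."""
    x = np.asarray(x)
    out = np.zeros((x.size, num_categories), dtype=np.int8)
    out[np.arange(x.size), x] = 1
    return out.reshape(-1)

# A length-3 categorical vector with K = 4 categories becomes a
# length-12 binary vector.
b = one_hot_binarize([2, 0, 3], num_categories=4)
```

The price of this reduction is a factor-K blow-up in dimension, which is one reason the scalability questions raised by the reviewer are relevant.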
Summary: The authors introduce a denoising sampling algorithm for distributions supported on the d-dimensional Boolean hypercube. In this discrete context, Gaussian noise does not exist and therefore one has to argue differently. The idea is to noise the target distribution by flipping each coordinate with a given probability, which depends upon a parameter $\alpha$. The smaller $\alpha$, the more noise is added to the target distribution. Next, leveraging a clean expression of an optimal denoiser through a conditional expectation, an approximate denoiser is learned from noisy samples using either logistic regression or a least-squares loss. Remarkably, the optimal denoiser admits a representation as the rescaled (continuous) gradient of the logarithm of the noisy distribution. This is in parallel with well-known results for Gaussian noising. A theoretical bound on the denoising performance is provided in Lemma 2.3. Section 2.4 explores the adaptation of this framework to multiple measurements, that is, when the same sample is noised several times by i.i.d. noise. Section 3 considers the problem of sampling from the noisy distribution. First, a one-stage discrete Langevin sampler is proposed. Contractivity properties of the sampler, under assumptions on $\alpha$, are proven in Proposition 3.1, and a control between the ergodic distribution of the sampler and the noisy distribution is the object of Proposition 3.2. Next, a two-stage Langevin sampler is considered and the same results as for the one-stage sampler are proven. The paper is concluded by a section that contains numerical experiments; first, synthetic data are considered in the form of mixtures of two independent binary vectors. Then, the binary MNIST dataset is considered. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes they are appropriate Theoretical Claims: I checked the correctness of the proofs which are not in the appendix.
Experimental Designs Or Analyses: I checked the soundness of the experiments in section 5; they seem appropriate to me. Supplementary Material: I did not review the supplementary material Relation To Broader Scientific Literature: The contributions of this article fit into the general literature on generative modeling. There has already been a significant number of papers proposing score-based methods to sample from discrete data distributions. As the authors correctly report, most of these methods use a forward-backward construction based on continuous-time diffusion processes, inspired by diffusion models. The approach undertaken here works in a single denoising step, and does not appeal to discretization of continuous-time processes. Citing this structural difference, the authors do not enter into any detailed comparison with prior work. I believe at least some comments on why doing everything in a single step is worthwhile are needed in Section 1.2 Essential References Not Discussed: I am not aware of prior work that considers the same algorithm proposed in this work. I am aware of the concurrent work by Le Tuyet et al., which studies the same problem but from a different perspective. https://arxiv.org/abs/2502.07939 . There is a large literature on generative modeling based on Markov chains. I believe the paper "Discrete diffusion Schrodinger bridge matching for graph transformation" by Kim et al. published at ICLR2025 should enter the literature review. Other Strengths And Weaknesses: Points of strength * The paper addresses a fundamental problem, and I believe there is potential for several applications. * The proposed algorithm is new, and offers a simpler framework than most previous generative algorithms for discrete sequences. The training objective is explicit, and offers an interesting parallelism with Gaussian denoising algorithms. Weaknesses * The proposed guarantees of convergence are weak.
Lemma 2.3 is interesting as it makes no assumptions on the target $p$. But at the same time, the proposed bound is very rough. One would expect mild regularity assumptions on $p$, such as, say, finite entropy, to translate into better bounds. The authors should improve on this issue. * There is essentially no precise discussion of the comparison with generative algorithms based on forward-backward Markov chains. This is true for both theory and experiments. Other Comments Or Suggestions: * The authors should provide some schematic view or pseudocode for their algorithm(s) * The authors should provide an intuition about the proofs of the results in section 3. Do they follow from known results? * The authors should comment on why the parameter $\eta$ does not enter the bounds in Propositions 3.2 and 3.4. It is not very clear to me Questions For Authors: 1) The log-concavity estimate (8) tells us that the modulus of log-concavity of $q$ is independent of the dimension $d$. Of course, this is false when noising in a Gaussian context. Can the authors comment on this? I suspect that this is because noising at level $\alpha$ in truth means that the computational cost of this operation is of order $\alpha d$. If so, it would be more correct to rewrite the results substituting $\alpha$ with $\alpha/d$. 2) I am confused by the statement right after Proposition 3.1. The authors seem to claim that the condition $\alpha\leq 1/(4\sqrt{d})$ implies the condition $4\alpha^2de^{2\alpha}\leq 1$. This claim seems false to me because of the term $e^{2\alpha}$, and if so it would have an important impact on the theoretical guarantees of the sampling part. It is likely that I am missing something obvious here, but I would still like the authors to clarify this point Code Of Conduct: Affirmed. Overall Recommendation: 3
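The coordinate-flip noising described in the reviewer's summary can be sketched as follows. This is an illustrative sketch: the function name is ours, and we expose the per-coordinate flip probability directly as `p` rather than through the paper's parameter $\alpha$:

```python
import numpy as np

def flip_noise(x, p, rng):
    """Noise a binary vector on the Boolean hypercube {0, 1}^d by
    flipping each coordinate independently with probability p."""
    flips = rng.random(x.shape) < p
    return np.where(flips, 1 - x, x)

rng = np.random.default_rng(0)
x = np.array([0, 1, 1, 0, 1])
y = flip_noise(x, p=0.1, rng=rng)   # a mildly corrupted copy of x
```

With `p = 0` the sample is untouched, and as `p` approaches `0.5` the noisy distribution approaches the uniform distribution on the hypercube, which matches the summary's "the smaller $\alpha$, the more noise" semantics up to reparametrization.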
Rebuttal 1: Rebuttal: Thank you for your review. Regarding "Essential References Not Discussed", the concurrent work by Le Tuyet et al. (the paper became public after the ICML submission deadline) is different from our paper, as it is based on a "continuous-time Markov chain," but both address the problem of sampling from a distribution on the Boolean hypercube. We'd be happy to cite the concurrent work as well as the paper by Kim et al. that the reviewer mentioned. Regarding "Weaknesses": - Improving the bound in Lemma 2.3 is a great research direction, but we're not sure what the reviewer means by "finite entropy." Entropy is always finite in the discrete space. - Regarding "precise discussions," please see our general response. In short, we'd be happy to extend our discussion of the diffusion literature. Regarding "Other Comments Or Suggestions": - Regarding "pseudo code," please see our general response. - The results in section 3 do not follow from known results. - The parameter $\eta$ does enter in Proposition 3.4 (see the condition in the proposition; $\eta$ cannot be too large, depending on $\beta_1$). The reviewer has a good point regarding $\eta$ not entering in the distance to the stationary distribution, which we'd be happy to emphasize in our revision. But please note that, in both cases, based on Propositions 3.1 and 3.3, if $\eta$ is too small the mixing time will be too large, which is also quite intuitive. Regarding "Questions For Authors": - The main difference between the Bernoulli and the Gaussian case is that the modulus of the random variables is constant on the Boolean hypercube. This may answer the reviewer's question regarding "the modulus of log-concavity". We're not sure what the reviewer means by "computational cost" in this question. The motivation for swapping $\alpha$ with $\alpha/d$ is not clear either.
The parameter $\alpha$ has the semantics of the noise parameter for each coordinate, and it's chosen due to its connection between $q_\alpha$ and the denoiser (the unnumbered equation in page 2). - Since $0 < 2\alpha \leq 0.5$, we have $e^{2\alpha} \leq 2$. It then follows that $4 \alpha^2 d e^{2\alpha} < 0.5$. We will clarify this in the paper. **GENERAL RESPONSE**: We would like to thank all the reviewers for their detailed feedback, highlighting the novelty and simplicity of our framework and providing pointers to the literature on discrete diffusion. We address the common concerns here. Three reviewers brought up references on discrete diffusion that we're happy to add to our literature review. We'd also be happy to extend our discussion of the diffusion approach vs. ours (non-SDE, single noise level) in the related work section. The closest reference to our paper is the concurrent paper by Le Tuyet et al., Discrete Markov Probabilistic Models (posted first on arXiv after the ICML deadline). We're happy to discuss this concurrent work as well. In short, their method is very different from ours, as it involves a "continuous-time Markov chain." Several reviewers commented on the need for pseudocode and schematics. Thank you for this constructive feedback. We will add pseudocode and a schematic in our revision. Regarding novelty, please note that there are no known results on the mixing time of discrete Langevin MCMC, and our convergence results also improve upon the known results [1]. In their paper, they assume the target distribution is quadratic (Eq. 5 in [1]), which we relax substantially with our assumption (4) in our paper. [1] Zhang et al. (2022) A Langevin-like Sampler for Discrete Distributions
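The arithmetic in the rebuttal above (that $\alpha \leq 1/(4\sqrt{d})$ together with $2\alpha \leq 0.5$ gives $4\alpha^2 d e^{2\alpha} < 0.5$) is easy to verify numerically; a quick sanity check at the boundary value of $\alpha$:

```python
import math

def bound_holds(d):
    """For the largest admissible alpha = 1/(4*sqrt(d)), check that
    4 * alpha^2 * d * exp(2*alpha) < 0.5: here 4*alpha^2*d = 1/4 and,
    since 2*alpha <= 0.5, exp(2*alpha) <= sqrt(e) < 2."""
    alpha = 1.0 / (4.0 * math.sqrt(d))
    return 4.0 * alpha**2 * d * math.exp(2.0 * alpha) < 0.5

ok = all(bound_holds(d) for d in range(1, 10001))
```

Since the left-hand side is monotone in $\alpha$, checking the boundary value covers all admissible $\alpha$ for each $d$.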
Do We Need to Verify Step by Step? Rethinking Process Supervision from a Theoretical Perspective
Accept (poster)
Summary: The paper's central takeaway is that offline RL under trajectory-wide sparse rewards (for LLMs, outcome supervision) and dense rewards (process supervision) are statistically equivalent (up to polynomial factors in the horizon). The main contribution is the Change of Trajectory Measure Lemma, a powerful result that bounds the ratio of second moments, under the trajectory distributions of two different policies, of a function of (s, a) summed across the trajectory. Crucially, the bound depends on the horizon and the state-action-level distribution shift between the policies. This Lemma is then used to show the aforementioned statistical equivalence (up to polynomial factors in the horizon) between outcome and process supervision, as well as tighter results for preference-based RL with explicit reward modeling and DPO. Finally, the paper analyzes two common approaches to learning a process reward from outcome supervision, showing that rewards approximating the advantage function are theoretically coherent, whereas rewards approximating the Q function are not consistent. ## Post–Rebuttal I have read the concerns from other reviewers and the responses from the authors, and I maintain my original assessment. Claims And Evidence: Most of the claims made are supported by a complete and novel theoretical analysis. Nonetheless, I believe the abstract and introduction do not state clearly enough that some of the provided results (Thm 1, Cor 2, Thm 4, and Thm 5) require the assumption of a finite hypothesis set, which impacts practical applicability. Similarly, the results provided (Thm 1, Cor 2) are limited to offline RL settings, which should also be clarified more explicitly in the introduction. Methods And Evaluation Criteria: Yes, claims are theoretical and accompanied by corresponding proof. Consequently, there are no empirical results, but none appear required for the paper's soundness.
Theoretical Claims: I carefully verified the proofs for the core Lemma 3 and Theorems 1 and 7. I also reviewed the proofs of the remaining results but did not reproduce them with the same level of detail. Everything seems correct. Experimental Designs Or Analyses: There are no empirical results. Supplementary Material: The supplementary material seems to coincide with the primary submission. Relation To Broader Scientific Literature: To the best of my knowledge, the results presented in this paper are novel and very valuable. Although their positioning focuses on the timely and active field of outcome and process supervision, their result seems to extend naturally to any offline RL method with trajectory-wide sparse rewards, making the analysis broadly impactful. More importantly, their main result, Lemma 3, is powerful and widely applicable even beyond offline RL. Their paper already shows its flexibility through the various results they derive from it, ranging from offline RL in general and outcome supervision to preference-based learning. Future works will likely benefit from and build on these results. More specifically, position-wise, this paper is tightly relevant to recent works in LLMs where either (1) only rewards for the outcome (outcome supervision) or (2) step-by-step rewards (process supervision) are available. The community has recently been unable to reach a consensus on which of these approaches is most compelling overall, as adequately discussed in the paper. On the one hand, process supervision provides a richer signal that aids learning, while outcome supervision facilitates data collection. This paper examines the situation from a theoretical perspective, proving that outcome supervision is statistically no harder than process supervision (up to poly(H) factors), providing a solid argument to challenge what’s considered the main limitation of outcome supervision with respect to process supervision. 
Regarding the framework adopted for the theory, this paper works under the "state-action coverage" assumption, which, as cited correctly in the paper, is a widely used measure of statistical difficulty in the offline setting. While this quantity is natural for process supervision, the equivalent for outcome supervision would be the much looser trajectory coverage. Through Lemma 3, this paper not only tightens results that naturally depend on trajectory-level coverage, such as the reward-modeling methods cited therein (e.g., DPO), but more generally bridges an important gap in OPE (as properly cited). Finally, the paper also provides compelling theoretical grounding for advantage-function-based "process" rewards, as done in some recent works, while shining a light on the limitations of Q-function-based rewards, as done in others. Indeed, the paper shows that the former preserves optimal policies while the latter doesn't. Overall, this paper significantly advances offline RL theory, with direct impact and insight on LLM post-training. Essential References Not Discussed: I don't have a specific set of missing works that should be cited, but I would propose some parts of the paper that would benefit from appropriate citations. Proofs such as the one of Lemma 3 seem to use relatively standard ideas from the literature. I don't want to push any specific work that inspired some of these ideas, but I believe it would be good if the authors acknowledged the works that most influenced their approach to the proofs. I would also appreciate relevant citations in the second column in L416 after "widely used." Other Strengths And Weaknesses: I believe the above conveys the strengths I find in the paper pretty comprehensively, as well as the main weaknesses (not stating adequately in the abstract and intro the offline setting and the finite hypothesis set). In terms of weaknesses, as usual, proofs could contain more text to ensure they are pleasant and accessible to the reader.
Also: * L179, C2. To use the union bound, and of course, as is apparent from the result, a big assumption is the finiteness of |R|. This can probably be relaxed in the future, but as it currently stands, this is a big assumption that should be more clearly stated. Other Comments Or Suggestions: * L074, C2. The notation probably belongs more to the background than an introduction, and given that both the O notation and sigmoid are reasonably common, I would remove this section and introduce the notation the first time it appears. * L075, C2. I would encourage using only the O notation (or the other), not both. * L111, C2. RL is used without relating it to “Reinforcement Learning,” and “Reinforcement Learning” is probably fully spelled too many times, as well as “Large Language Model.” * L168, C2. Here, it would be nice to clarify that you are referring to state-action rewards more explicitly, maybe through using the r notation to distinguish from R. I think this paragraph could be clearer. * L259, C1. This paragraph could be made clearer by saying: “Indeed, many classical offline RL algorithms ... are known to achieve exactly this sample complexity under standard coverage assumptions”. Otherwise, the two sentences might seem to make different points while trying to get to the same point. * L268, C1. Maybe it would be clearer to explicitly refer to “the single-policy concentration bound in Appendix C, ...” instead of "the data-dependent case.” * L270, C2. "till" -> "until" * L300, C1, and L863, among others. I would encourage using max and min instead of ∨ and ∧. * L694, I believe there is a typo here and f(s_h) should be r(s_h). Questions For Authors: I have no specific questions, but I welcome the author’s pointing out anything in my review that might suggest I have misunderstood anything. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you very much for your support. *** > Relevant Citations in L416 We will add the citations accordingly. *** > Assumption on finiteness of $|R|$ Thanks for pointing this out. Even though the current theorems are stated for finite reward models, the change-of-trajectory-measure lemma does not require the size of the reward class to be finite. We believe it is easy to generalize our results to an infinite-size parametric reward class, or even to a nonparametric reward class, with the help of classical tools for analyzing the ERM performance in parametric or nonparametric model classes. We will add an explicit discussion regarding the assumption on the finiteness of $|R|$ in our future version. *** > Typos Thanks for pointing out the typos. We will fix these in the revision.
Summary: This paper presents a theoretical analysis of the statistical equivalence between dense reward trajectories and sparse reward trajectories, which return the total reward only at the end of the episode, in offline RL. **Post-rebuttal** I read the responses from the authors and other reviewers. I think this is a theory paper, as mentioned in the rebuttals, and it shows new findings. There is a weak connection to practical algorithm design, but this paper offers valuable insights: the difficulty of obtaining PRMs in practice, the fact that several approaches use outcome rewards to derive process rewards, and that the advantage function is a better choice than the Q-function. This paper will be more appreciated by people who are interested in theory. Extracting more valuable insights would be even more appreciated, and providing intuition about the assumptions would also increase the impact of this work. Finally, I increased the score, reflecting the significance of the work. Claims And Evidence: The main claims are given in the form of concentration inequalities. Methods And Evaluation Criteria: NA Theoretical Claims: No Experimental Designs Or Analyses: NA Supplementary Material: NA Relation To Broader Scientific Literature: This is a mathematical theory paper analyzing error bounds in offline reinforcement learning, and I am not sure how many of the theoretical claims can actually be transferred to the scientific literature. Essential References Not Discussed: No Other Strengths And Weaknesses: The strength is that the claim is very appealing, if it holds in practice: we don't need to annotate or learn a dense reward model, but sparse reward trajectories could solve all problems. The main weakness is that it is difficult to imagine how this result can be useful. Other Comments Or Suggestions: NA Questions For Authors: Could you provide any tangible procedure that could replace the PRM with ORM? In practice, what will be the difficulty of applying this idea?
What assumption is not realistic? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the review. Below, we address the concerns raised in the review. *** > We don't need to annotate or learn a dense reward model, but sparse reward trajectories could solve all problems. The main weakness is that it is difficult to imagine how this result can be useful. > Could you provide any tangible procedure that could replace the PRM with ORM? > In practice, what will be the difficulty of applying this idea? Thanks for the feedback. We suspect that there might be some misunderstanding regarding the terminology and our focus. In this paper, we focus on statistical complexity by using an information-theoretic perspective to distinguish outcome/process supervision, that is, whether the external supervision signals are trajectory-wise or step-wise (see Section 2.2 for a detailed explanation). Under our formulation, approaches that learn PRMs from outcome labels are considered outcome supervision, because these algorithmic interventions do not improve the statistical limits. Our primary focus is on statistical complexity rather than algorithmic design. The central takeaway is a theoretical result: RL from sparse trajectory rewards is not, information-theoretically, significantly harder than RL from dense step-level rewards, despite the ostensible gap in information. In terms of tangible procedures, many recent empirical studies that train PRMs using outcome labels [1,2,3] are, in our terminology, outcome-supervised (again, because the algorithmic aspects do not improve the statistical limits), which supports rather than contradicts our theoretical results. Our theory also aligns with the recent successes of outcome-based RL in the LLM industry (e.g., DeepSeek-R1, Kimi-K1.5), which further exemplify a tangible outcome-supervision procedure. *** > What assumption is not realistic? Note that our paper follows the most general assumptions of RL theory with general function approximation.
In fact, perhaps surprisingly, our core technical contribution—the Change of Trajectory Measure Lemma (Lemma 3)—only requires the Markov property and does not rely on any structural assumption regarding the environment’s dynamics or function class. This means that Lemma 3 is widely applicable to any Markov decision process, beyond the problem in this paper. We suspect that this question from the reviewer arises because our key message (that outcome supervision is not much harder than process supervision) seems to contradict existing empirical results. However, this is just a matter of terminology, as we discussed above. Again, empirical successes in learning PRMs from outcome labels should be considered outcome supervision in our terminology, which supports rather than contradicts our theoretical results. *** References: [1] Luo, Liangchen, et al. "Improve mathematical reasoning in language models by automated process supervision." arXiv preprint arXiv:2406.06592 2 (2024). [2] Wang, Peiyi, et al. "Math-shepherd: Verify and reinforce llms step-by-step without human annotations." arXiv preprint arXiv:2312.08935 (2023). [3] Setlur, Amrith, et al. "Rewarding progress: Scaling automated process verifiers for llm reasoning." arXiv preprint arXiv:2410.08146 (2024).
Summary: The paper compares process and outcome supervision in reinforcement learning from a theoretical perspective. The central result shows that RL with outcome rewards is statistically no more difficult than RL with step-level rewards, via a result stating that an offline RL dataset with outcome rewards can be transformed into a dataset with process rewards with minimal loss in statistical efficiency. Claims And Evidence: The theoretical claims all seem plausible, although I did not check the proofs in detail. However, I think some of the claims in the introduction and general framing of the paper are too strong. In general, the results in the paper are about RL with goal-based reward and step-level reward. I don't think there is anything specific to RL with LLMs. Still, the introduction heavily discusses work with LLMs, suggesting particular applicability to that context. In fact, I think the applicability is probably lower because the reward signals in LLMs are often provided by human overseers. The main claim of the paper is that we can draw conclusions about which of the two methods should work better in practice based on statistical difficulty metrics of the learning problems. While I think the results are interesting and provide some evidence about this question, I think the introduction is vastly overclaiming the applicability to practical choices. Methods And Evaluation Criteria: The proposed theoretical analysis is a valid way to gain insight into the comparison between process and outcome supervision. However, the analysis neglects practically important aspects such as where the process and outcome rewards come from. The paper shows a way of transforming process into outcome rewards and vice versa, but it is unclear what this implies in practice. For example, the quality of outcome and process rewards can vary widely if the rewards are given by humans.
In some domains it is easier for humans to provide outcome rewards, and in other domains it's easier to provide a process reward signal. So, overall, I'm not drawing any strong conclusions from the results in this paper. Theoretical Claims: I did not check any of the proofs in detail and I do not have the relevant background to evaluate whether the proofs are correct. Experimental Designs Or Analyses: No experiments. Supplementary Material: Did not review. Relation To Broader Scientific Literature: The paper provides a theoretical perspective to compare outcome and process supervision, which in this form is novel. Essential References Not Discussed: I was surprised to not see essentially any discussion of pre-LLM RL. In particular, work on goal-conditioned RL and step-level feedback seems relevant. For example, potential-based reward shaping is a way to transform a sparse ("outcome") reward into a shaped ("process") reward. Other Strengths And Weaknesses: - Other Comments Or Suggestions: Overall I'm very unsure about this paper because I don't have the necessary expertise to evaluate the validity and importance of the theoretical claims. From a practical perspective I'm skeptical that the results are particularly useful. But I would support accepting the paper if other reviewers find the results interesting from a theoretical perspective. Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 2
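The potential-based reward shaping the reviewer mentions is the classic construction of Ng et al. (1999), which turns a sparse reward into a dense one while provably preserving the set of optimal policies. A minimal sketch (the toy environment, potential, and all names here are our own illustration):

```python
def shape_reward(r, s, s_next, phi, gamma=0.99):
    """Potential-based reward shaping (Ng et al., 1999):
    r'(s, s') = r + gamma * phi(s') - phi(s) preserves the set of
    optimal policies for any potential function phi."""
    return r + gamma * phi(s_next) - phi(s)

# Toy example on a 1-D chain: a sparse "outcome" reward (1 only at
# the goal) becomes a dense "process"-like signal under a
# distance-to-goal potential.
goal = 10
phi = lambda s: -abs(goal - s)                       # negative distance to goal
sparse = lambda s, s_next: 1.0 if s_next == goal else 0.0

shaped = shape_reward(sparse(3, 4), 3, 4, phi, gamma=1.0)  # moving closer: +1
```

With `gamma = 1.0`, each step toward the goal earns +1 and each step away earns -1, so the agent receives step-level credit even though the underlying reward is purely outcome-based.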
Rebuttal 1: Rebuttal: Thank you for the review. Below, we address the concerns raised in the review. *** > I think some of the claims in the introduction and general framing of the paper are too strong … I think the applicability is probably lower because the reward signals in LLMs are often provided by human overseers. > The analysis neglects practically important aspects such as where do the process and outcome rewards come from … overall I'm not drawing any strong conclusions from the results in this paper. If our understanding is correct, the main concern of the reviewer is that the outcome reward and process reward in practice often come from different sources with varying quality, so it is not practical to directly compare their sample complexity without considering these potential differences in quality. We appreciate this feedback; however, we would like to clarify that our paper primarily focuses on RL from outcome reward, because RL from process reward is already extensively studied in theory. This paper only considers the case where the outcome reward has the same quality as the process reward (i.e., outcome reward = sum of process reward), so that learning from process reward is always easier. We then investigate how much more difficult RL from outcome reward can be compared with process reward. This is well motivated by recent LLM reasoning problems, as checking the correctness of the entire solution is not more difficult than verifying each step. Our major result is both simple and somewhat counterintuitive: **we prove that RL from outcome reward is not much harder than RL from process reward, information-theoretically**, even though process reward seems to provide far more information for credit assignment. We believe this theoretical conclusion is significant, because it coincides with and explains recent surprising successes of outcome-based RL in the LLM industry (e.g., DeepSeek-R1, Kimi-K1.5). 
Finally, we emphasize that our submission is a theoretical paper, submitted to the theory category. Besides our main message, we present the novel Change of Trajectory Measure Lemma, which is the first result addressing trajectory-level change of measure with only step-level distribution shift. This lemma has broader applicability in RL theory beyond this paper. Hence, our theoretical results, rather than the outcome-to-process transformation itself, represent our core contribution and substantially deepen the theoretical understanding of reinforcement learning. *** > Pre-LLM RL discussion We do include substantial discussion of pre-LLM RL throughout the paper. For instance, Section 2 addresses coverage assumptions in the classic RL literature, and Section 3 discusses offline RL and preference-based RL. We also provide a Related Work section in the appendix, which includes references to pre-LLM RL research that is not covered in detail in the main text. Furthermore, we will add a discussion of RL with trajectory feedback and reward shaping to the Related Work section in our revision.
Summary: This paper challenges the conventional belief that process supervision (step-wise rewards) is statistically superior to outcome supervision (cumulative rewards) in reinforcement learning, particularly for complex tasks like LLM reasoning. The main contributions are:

1. **Theoretical Equivalence**: Under standard data coverage assumptions, outcome supervision is shown to have comparable statistical complexity to process supervision via a novel *Change of Trajectory Measure Lemma*. This lemma bounds trajectory-level distribution shifts using state-action concentrability, avoiding exponential dependence on horizon length.
2. **Advantage Functions as Optimal Process Rewards**: Theoretically, advantage functions of any policy can serve as optimal process reward models, while Q-functions may lead to sub-optimal results.
3. **Extensions to Preference-Based RL**: Improved sample complexity bounds for Direct Preference Optimization (DPO), replacing trajectory-level concentrability with state-action terms.
4. **Practical Algorithm**: A transformation method (Algorithm 1) that enables the use of process supervision algorithms with outcome supervision data.

## update after rebuttal

Maintain original score. See reasons in rebuttal comments.

Claims And Evidence: The paper's claims are generally well-supported by theoretical analysis and proofs:

- **Supported Claims**:
  - The equivalence between outcome/process supervision under coverage (Theorems 1, 4) is rigorously proven using concentration inequalities and the Change of Trajectory Measure Lemma.
  - The optimality of advantage functions as reward models (Theorem 6) and the sub-optimality of Q-functions (Theorem 7) is validated via a constructed MDP counterexample.
  - The improved sample complexity for preference-based RL (Theorems 4, 5) is mathematically derived.
- **Unsupported Claims**:
  - Practical implications (e.g., reduced need for process annotations) remain speculative due to the lack of empirical validation.
The mathematical derivations are sound, though some proofs (particularly for Lemma 3) are quite technical and would benefit from more intuitive explanations.

Methods And Evaluation Criteria: The theoretical framework is appropriate, leveraging established concepts (state-action concentrability, offline RL theory) to analyze the statistical complexity of different supervision paradigms. The paper focuses on theoretical analysis rather than empirical evaluation, which is reasonable given the nature of the contributions. The evaluation criteria, based on statistical complexity and sample efficiency, are appropriate for comparing the two supervision paradigms. However, the absence of experiments limits assessment of real-world applicability, especially given that the theoretical results contradict some empirical observations where process supervision often outperforms outcome supervision.

Theoretical Claims: I checked the correctness of several key proofs:

1. **Change of Trajectory Measure Lemma (Lemma 3)**: The proof in Section B.1 is logically sound, using a careful decomposition of states into "good" and "bad" states and analyzing their probabilities under different policies. The technical approach effectively bounds trajectory-level distribution shifts using state-action concentrability.
2. **Theorem 1 (Learning a Reward Model from Total Reward)**: The proof correctly applies the Change of Trajectory Measure Lemma to bound the error between the true reward model and the learned reward model.
3. **Theorems 4 and 5 (Preference-Based RL)**: The proofs extend the main results to preference-based settings, correctly applying the Change of Trajectory Measure Lemma to derive improved sample complexity bounds.
4. **Theorem 6 (Advantage Function Learning)**: The proof correctly uses the performance difference lemma to show that the advantage function can serve as an optimal process reward model.
5. **Theorem 7 (Lower Bound on Failure of Using Q-Functions)**: The proof provides a valid counterexample showing that using Q-functions as reward models can lead to sub-optimal policies.

No apparent errors were detected in the proofs, though some sections could benefit from more intuitive explanations.

Experimental Designs Or Analyses: The paper is primarily theoretical and does not include empirical experiments. While the focus is theoretical, experiments on synthetic MDPs or LLM tasks would strengthen the practical relevance of the findings, especially given that they challenge conventional wisdom. The absence of empirical validation is a limitation, particularly since the theoretical results suggest a different conclusion than what is often observed in practice, where process supervision frequently outperforms outcome supervision.

Supplementary Material: I reviewed the supplementary material, which includes:

1. **Related Work (Appendix A)**: Provides context for the paper's contributions in relation to process/outcome supervision, offline RL, and off-policy evaluation.
2. **Detailed Proofs (Appendix B)**: Contains comprehensive proofs for the main results in Section 3, including the Change of Trajectory Measure Lemma and extensions to preference-based reinforcement learning.
3. **ARMOR Algorithm Analysis (Appendix C)**: Shows how the transformation can be applied to a specific offline RL algorithm with total reward.
4. **Missing Proofs from Section 4 (Appendix D)**: Includes proofs for the advantage function learning results and the counterexample for Q-functions.

The supplementary material is thorough and provides the necessary technical details to support the paper's claims.

Relation To Broader Scientific Literature: The paper effectively connects to several areas of reinforcement learning literature:

- **Offline RL**: Builds on work by Chen & Jiang (2019), Xie et al. (2021a), and others on concentrability coefficients and offline policy learning.
- **Off-Policy Evaluation**: Relates to work by Uehara et al. (2020) on the change of measure problem and the distinction between state-action and trajectory coverage.
- **Preference-Based RL**: Extends results to DPO (Rafailov et al., 2023) and provides improved sample complexity bounds.
- **Process vs. Outcome Supervision**: References empirical work showing the effectiveness of process supervision (Uesato et al., 2022; Lightman et al., 2023).

The paper's novelty lies in bridging outcome and process supervision via concentrability concepts and providing a theoretical foundation for understanding their relationship.

Essential References Not Discussed: The paper mentions but could more thoroughly discuss recent works on automated process supervision that generate step-wise rewards from outcomes, such as:

1. Wang et al. (2024) "Math-shepherd: Verify and reinforce LLMs step-by-step without human annotations"
2. Luo et al. (2024) "Improve mathematical reasoning in language models by automated process supervision"
3. Setlur et al. (2024) "Rewarding progress: Scaling automated process verifiers for LLM reasoning"

These works are briefly cited but not deeply analyzed, despite being highly relevant to the paper's claim that outcome supervision can be transformed into process supervision. A more detailed discussion would better contextualize the practical need for manual process labels and the empirical approaches already being used to address this issue.

Other Strengths And Weaknesses:

### Strengths:

1. **Novel Theoretical Insight**: The Change of Trajectory Measure Lemma is a significant technical contribution that challenges a widely held assumption about the statistical complexity of outcome vs. process supervision.
2. **Non-trivial Connections**: The paper establishes important connections between advantage functions and process rewards, providing theoretical justification for certain empirical approaches.
3. **Comprehensive Analysis**: The theoretical analysis covers various aspects of the problem, including extensions to preference-based learning and the role of advantage functions.
4. **Practical Algorithm**: Algorithm 1 provides a concrete way to transform outcome supervision data into process supervision data, making the theoretical insights actionable.

### Weaknesses:

1. **Heavy Reliance on Coverage Assumptions**: The results depend on state-action concentrability coefficients (Csa), which may not hold in practice, especially in complex environments like those encountered in LLM training.
2. **Lack of Empirical Validation**: The absence of experiments weakens the impact of the theoretical results, especially since they contradict some empirical observations.
3. **Limited Intuitive Explanations**: Some of the proofs, particularly for Lemma 3, are quite technical and would benefit from more intuitive explanations.
4. **Disconnect from Practice**: The paper could more explicitly address why, if outcome and process supervision are statistically equivalent, empirical results often show process supervision outperforming outcome supervision.

Other Comments Or Suggestions:

1. Section 4 on advantage function analysis could be better motivated and connected to the main thesis of the paper.
2. There are a few typos in the paper, including "contradistinction" (line 100) and "alterative" instead of "alternative" (line 171).
3. The paper would benefit from a discussion of limitations, particularly regarding the assumptions made in the theoretical analysis and how they might not hold in certain practical settings.
4. A simple illustrative example demonstrating the key insight of the paper would help readers grasp the main idea more intuitively.

Questions For Authors:

1. How do your theoretical assumptions (e.g., Csa) align with real-world scenarios like LLM training, where state-action coverage is often poor? This would help clarify the practical applicability of your results.
2. Could the advantage function result (Section 4) be empirically validated on a simple MDP or LLM reasoning task? This would strengthen the connection between theory and practice.
3. Have you considered extending the analysis to settings with partial observability, which are common in sequential decision-making for LLMs? This would address a significant aspect of real-world applications.

Responses to these questions would clarify the applicability of the theoretical results and guide future empirical work in this area.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
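A quick sanity check of Question 2 is possible on a tiny tabular example. The sketch below uses an invented deterministic chain MDP (not from the paper): it evaluates the advantage function of an arbitrary reference policy exactly and checks that planning against that advantage, used as a per-step reward, recovers the same optimal policy as planning against the original reward (the total advantage telescopes to the return minus a constant per initial state).

```python
import random

# Invented toy MDP: 3-state chain, 2 actions, horizon 3.
# Action 1 moves right (capped at the last state), action 0 stays.
H, S, A = 3, 3, 2
random.seed(0)
step = lambda s, a: min(s + a, S - 1)
r = {(h, s, a): random.random()
     for h in range(H) for s in range(S) for a in range(A)}

def evaluate(policy):
    """Exact finite-horizon values V[h][s] of a deterministic policy."""
    V = [[0.0] * S for _ in range(H + 1)]
    for h in reversed(range(H)):
        for s in range(S):
            a = policy[h][s]
            V[h][s] = r[(h, s, a)] + V[h + 1][step(s, a)]
    return V

def plan(reward):
    """Optimal deterministic policy for a given per-step reward table."""
    V = [[0.0] * S for _ in range(H + 1)]
    pi = [[0] * S for _ in range(H)]
    for h in reversed(range(H)):
        for s in range(S):
            q = [reward[(h, s, a)] + V[h + 1][step(s, a)] for a in range(A)]
            pi[h][s] = max(range(A), key=q.__getitem__)
            V[h][s] = max(q)
    return pi

ref = [[0] * S for _ in range(H)]   # arbitrary (bad) reference policy
Vr = evaluate(ref)
adv = {(h, s, a): r[(h, s, a)] + Vr[h + 1][step(s, a)] - Vr[h][s]
       for h in range(H) for s in range(S) for a in range(A)}

# Using the reference policy's advantage as a process reward preserves
# the optimal policy of the original MDP.
assert plan(adv) == plan(r)
```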
Rebuttal 1: Rebuttal: Thank you for your review. Below, we address the concerns raised in the review. *** > more thoroughly discuss recent works on automated process supervision [1-3] Thanks for pointing these out, we will add more detailed discussion in the updated version. *** > Coverage Assumptions We would like to clarify that coverage assumptions are essential and widely used in the theory of offline RL (see discussion in Section 2.3). Our paper provides results on two kinds of coverage assumptions: in addition to $C_{\sf s,a}(\Pi,\pi_{\sf off})$ (coverage of all $\pi$ in $\Pi$) used in Section 3., we also have results in Appendix C that rely solely on $C_{\sf s,a}(\pi^*,\pi_{\sf off})$ (coverage for a single good policy $\pi^*$). We believe the latter notion should be considered practical even in the LLM regime, because it only requires the data distribution to cover a **single** good policy. For example, $\pi^*$ could be the best-of-N policy. In this case, $C_{\sf s,a}(\pi^*,\pi_{\sf off})$ is bounded by N. *** > Lack of Empirical Validation / Disconnect from Practice Thanks for pointing this out. We suspect that there might be some misunderstanding regarding the terminology. In this paper, we focus on statistical complexity by using an information-theoretic perspective to distinguish outcome/process supervision—that is—whether the external supervision signals are trajectory-wise or step-wise (see Section 2.2 for detailed explanation). Therefore, recent empirical results that learn PRMs from outcome labels [1,2] are actually outcome supervised in our language (because these algorithmic interventions do not benefit the statistical limits), which supports rather than contradicts our theoretical results. 
Our theory also coincides with the recent findings from DeepSeek-R1 and indicates that outcome annotation only induces a poly(H) additional cost of statistical complexity compared to step-by-step annotation (which can be easily bypassed if we have an “almost-free” rule-based outcome reward). *** > Limited Intuitive Explanations We agree the proof of Lemma 3 can be complicated and counterintuitive. We have re-written the proof sketch and will update it in our future version. Due to space constraints, we only include the most important intuition below: The key difficulty of implementing a trajectory-level change of measure is that small values of $|f(\tau)|$ do not necessarily ensure small values on every prefix $\tau_{1:h}$ or suffix $\tau_{h+1:H}$ (since cancellations could occur; for example, $a + b = 0$ does not imply $a = b = 0$). However, the Markov property ensures that if \(f\) has a small second moment under $\pi_{\sf off}$ and state $s_h$ is frequently visited, then $f$ cannot have high variance on either the prefix or the suffix through $s_h$. This is because high variance on any component would imply high variance for the entire trajectory. From that, we can apply the change of measure over these frequently encountered states from the offline policy to the objective policy, only paying the state-action concentrability. Afterward, we show that if all states along a trajectory satisfy the prefix/suffix trajectory low-variance property mentioned above, then the absolute value of the total rewards can be upper bounded as well. *** > about advantage function result (Section 4) There has been some work that explicitly validates the idea of using the advantage function in reward models [3], demonstrating an advantage over using the Q-function as a reward. However, it is still unclear which approach is theoretically correct, as using the Q-function as a reward also shows strong empirical potential [1,2]. In Section 4, our motivation is to clarify this A vs. 
Q debates: we formally prove that using the advantage function of any policy as a reward is correct in principle, whereas using the Q-function can fail in certain cases. *** > Decision making with partial observability Thank you for bringing this to our attention. This is an interesting research direction, and we plan to explore it in our future work. *** References: [1] Luo, Liangchen, et al. "Improve mathematical reasoning in language models by automated process supervision." arXiv preprint arXiv:2406.06592 2 (2024). [2] Wang, Peiyi, et al. "Math-shepherd: Verify and reinforce llms step-by-step without human annotations." arXiv preprint arXiv:2312.08935 (2023). [3] Setlur, Amrith, et al. "Rewarding progress: Scaling automated process verifiers for llm reasoning." arXiv preprint arXiv:2410.08146 (2024). --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal addressing the concerns raised in my review. After considering your responses, I have the following thoughts: ## On Coverage Assumptions Your clarification about the two types of coverage assumptions is helpful. I appreciate the distinction between all-policy coverage (Csa(π, πoff) for all π) and single-policy coverage (Csa(π*, πoff)). The latter is indeed more practical, especially in LLM settings where coverage of a single good policy (like best-of-N) is more feasible. This addresses one of my main concerns about the practical applicability of your theoretical results. I still believe it would strengthen the paper to explicitly discuss how these coverage assumptions might manifest in practical LLM training scenarios, perhaps with concrete examples. This would help bridge the gap between theory and practice for readers less familiar with these concepts. ## On Empirical Validation and Terminology Thank you for clarifying the terminology. 
I now better understand that recent empirical approaches that learn PRMs from outcome labels [1,2] are considered "outcome supervised" in your framework, which indeed supports rather than contradicts your theoretical findings. This is an important distinction that could be made clearer in the paper to avoid similar misunderstandings by other readers. Your reference to DeepSeek-R1's findings about the poly(H) additional cost of statistical complexity for outcome annotation compared to step-by-step annotation is interesting and supports your theoretical results. Including this connection explicitly in the paper would strengthen the practical relevance of your work. ## On Intuitive Explanations I appreciate the more intuitive explanation of Lemma 3. Your explanation about how the Markov property ensures that frequently visited states cannot have high variance on either prefix or suffix trajectories helps clarify the counterintuitive nature of the result. This explanation would be valuable to include in the revised paper, as it makes the technical contribution more accessible. ## On Advantage Functions vs. Q-Functions Thank you for clarifying the motivation behind Section 4. Your explanation that this section aims to resolve the "A vs. Q debates" by formally proving that advantage functions are theoretically correct while Q-functions can fail in certain cases is compelling. This motivation could be made more explicit in the paper to better connect this section to the main thesis. The reference to empirical work [3] that validates the advantage function approach is helpful and strengthens the practical relevance of your theoretical findings. ## On Partial Observability I'm glad to hear you're interested in exploring decision-making with partial observability in future work. This would indeed be a valuable extension of your current results. ## Overall Assessment After considering your rebuttal, I maintain my recommendation of "Weak accept" but with increased confidence. 
Your clarifications have addressed my main concerns about: 1. The practical applicability of coverage assumptions 2. The apparent disconnect between theory and empirical observations 3. The motivation behind the advantage function analysis The paper makes a significant theoretical contribution by challenging conventional wisdom about process vs. outcome supervision, and your rebuttal has helped clarify how these theoretical insights connect to practical applications. For the revised version, I would recommend: 1. Making the terminology distinctions clearer to avoid misunderstandings 2. Including the more intuitive explanation of Lemma 3 3. Explicitly connecting your theoretical results to recent empirical findings 4. Better motivating Section 4 as resolving the "A vs. Q debates" 5. More thoroughly discussing the recent works on automated process supervision These changes would strengthen the paper and make its contributions more accessible and impactful to the broader machine learning community. --- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful follow-up comment and for acknowledging that our clarifications addressed your concerns. We also truly appreciate your detailed suggestions for improving the paper's accessibility, and we will incorporate them into our revision. Below is our complete version of the improved proof sketch for Lemma 3, and we hope it now provides a more intuitive explanation. *** > ### **Proof Sketch of Lemma 3 (New)** > **Insight I: Trajectory-level bound controls the second moment on prefixes and suffixes** > At first glance, small value $|f(\tau)|$ over the entire trajectory $\tau$ does *not* obviously guarantee that the value of $f$ on either (i) every prefix $\tau_{1:h}$ or (ii) every suffix $\tau_{h+1:H}$ is small. In principle, large positive and large negative portions of a single trajectory could “cancel” each other out, resulting in a small overall sum $|f(\tau)|=|f(\tau_{1:h})+f(\tau_{h+1:H})|$. 
> Crucially, however, thanks to the *Markov property*, we can argue that if $f$ has small second moment (under $\pi_{\sf off}$) and if a state $s_h$ is visited sufficiently often by $\pi_{\sf off}$, $f$ cannot have high variance on the prefix (leading up to $s_h$) and suffix (following $s_h$). Indeed, if the value of $f$ on the prefix (or suffix) has large variance, then conditioned on passing through $s_h$ that is visited sufficiently often by $\pi_{\sf off}$, the value of $f$ on the entire trajectory also has large variance, which directly implies the large variance (hence, large second moment) of $f(\tau)$. Hence, even though the trajectory-level bound looks coarse, it forces each state $s_h$ to have relatively stable partial sums in both the prefix and suffix directions under $\pi_{\sf off}$. > **Insight II: Layer-by-layer “locking” with only state-action coverage** > Next, we want to argue that if *all* states in a trajectory satisfy the above low-variance property (we call such states “good” states), then the reward of the entire trajectory cannot have large absolute value. We call this the “locking in” property here for brevity. In the following, we argue that “locking in” happens with high probability, even under policy $\pi$. > According to the earlier argument, “bad” states (opposite of “good” states) cannot have large visitation probability under $\pi_{\sf off}$. Then, by the definition of $C_{s,a}(\pi, \pi_{\sf off})$, which upper bounds the probability ratio between $\pi$ and $\pi_{\sf off}$ at any state, we conclude that such bad states also have low probability under $\pi$, up to a factor of $C_{s,a}(\pi,\pi_{\sf off})$. Thus, we avoid exponential blow-up over the horizon because we only “pay” for distribution shift at each individual $(s,a)$, rather than for entire trajectories: $\Pr_{\tau \sim \pi}(\text{bad}) \leq C_{s,a}(\pi,\pi_{\sf off})\cdot\Pr_{\tau \sim \pi_{\sf off}}(\text{bad})$. 
> Hence, with high probability, $\pi$ visits only “good” states throughout its trajectory, ensuring that each layer $h$ “locks in” a small partial sum (as both its prefix and suffix have low variance). (Footnote: Our formal proof also needs to consider the “good” state-action-state tuples, which are similar to “good” states but involve the $(s_h,a_h,s_{h+1})$ tuple. We omit the details here for brevity, and readers can refer to the full proof in Appendix B.1.) When we stitch these layers from $h = 1$ to $h = H$, the entire sum $f(\tau)$ is guaranteed to have small absolute values. *** On the other hand, we would still like to respectfully reiterate that our submission is fundamentally a theoretical work submitted to the Theory track. Although this paper is motivated by practical questions about different supervision paradigms for LLMs, we believe our core theoretical contributions, especially the Change of Trajectory Measure Lemma (Lemma 3), could stand on their own merit (even without any discussion or connection about empirical papers). This lemma challenges the conventional wisdom that learning from trajectory-wise feedback is intrinsically harder in RL theory (e.g., Section 4.3 of [4]). As Reviewer jQ9F also recognized in their positive evaluation ("lemma 3, is powerful and widely applicable even beyond offline RL... Future works will likely benefit from and build on these results"), the theoretical advancement itself is the major contribution of our paper, rather than providing insights for or explaining practical algorithms. Given the theoretical nature and scope of our paper, we would be very grateful if you could please reconsider your recommendation, especially based on our theoretical contributions like the Change of Trajectory Measure Lemma and its implications. We would be happy to discuss any further questions. *** Reference [4] Zhan, W., Uehara, M., Kallus, N., Lee, J. D., & Sun, W. Provable Offline Preference-Based Reinforcement Learning. 
In The Twelfth International Conference on Learning Representations.
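The first step of Insight I in the sketch above can be written out as an inequality (our notation-level paraphrase, not the formal statement of Appendix B.1): conditioning on a visit to $s_h$ and using that, by the Markov property, the prefix and suffix values are conditionally independent given $s_h$,

$$\mathbb{E}_{\pi_{\sf off}}\big[f(\tau)^2\big] \;\ge\; \Pr{}_{\pi_{\sf off}}(s_h)\,\mathrm{Var}\big(f(\tau)\mid s_h\big) \;=\; \Pr{}_{\pi_{\sf off}}(s_h)\Big(\mathrm{Var}\big(f(\tau_{1:h})\mid s_h\big)+\mathrm{Var}\big(f(\tau_{h+1:H})\mid s_h\big)\Big),$$

so a small second moment of $f$ under $\pi_{\sf off}$ forces both the prefix and the suffix variance through any frequently visited $s_h$ to be small; cancellations cannot hide high variance on either side.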
Subspace Optimization for Large Language Models with Convergence Guarantees
Accept (poster)
Summary: The paper critically examines GaLore, highlighting its convergence limitations and proposing GoLore, a robust variant that ensures stochastic convergence. The findings contribute to improving memory-efficient subspace optimization methods for LLM training.

## update after rebuttal

My concerns were addressed in the rebuttal.

Claims And Evidence: The authors claimed GaLore's limitation and provided theoretical justification.

Methods And Evaluation Criteria: The experiments mainly focus on fine-tuning on the GLUE benchmark using pre-trained RoBERTa-Base. In the appendix, they also consider fine-tuning LLaMA 2-7B.

Theoretical Claims: First, the authors argue that GaLore suffers from convergence issues under general conditions and provide theoretical evidence. Subsequently, they prove GaLore's convergence when the batch size increases in the order of $\sqrt{T}$. Finally, they propose their own algorithm, GoLore.

Experimental Designs Or Analyses: I think the experiments are not enough. The improvement is minor.

Supplementary Material: Yes, I checked some of the proofs and experiments in the appendix.

Relation To Broader Scientific Literature: They show GaLore's limitation in theory, which is interesting to me.

Essential References Not Discussed: The references are good.

Other Strengths And Weaknesses:

Weaknesses: From my perspective, GaLore is mainly developed for the training of LLMs. The current experiments only consider fine-tuning, which may not be enough to support the effectiveness of this algorithm. In the fine-tuning task, the improvement is minor, which weakens the contribution of this work. The proposed algorithm still relies on GaLore in the initial stage of the practical implementation, as mentioned by the authors.

Strengths: This paper proposes an algorithm for LLM training with a convergence guarantee, which enjoys better theoretical properties.
Other Comments Or Suggestions: No

Questions For Authors:

1. It would be better to include important parameters and a discussion of these parameters in the main text.
2. In your analysis, it seems that you didn't consider the momentum and Adam cases. It would be better to clarify this.
3. The conventional SGD algorithm relies on a diminishing step size to eliminate the stationary gap. In your counterexample and proof, can we reduce the stationary gap by reducing the step size? Is this an issue faced by all stochastic algorithms?
4. If your claim that the SVD decomposition biases the gradient holds, why does deterministic GaLore converge (Theorem 5)?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging our theoretical findings and for the valuable comments. All questions have been clarified as best as we can, and we are glad to address any further comments or questions. We include all new experiments in this **[anonymous link](https://www.hostize.com/v/pcWmN50BVa)**. **Weakness 1. GaLore is mainly developed for (pre-)training, current experiments only considering fine-tuning are not enough.** We would like to respectfully remind the reviewer that we have already included a pre-training result in Figure 4, where the algorithms are applied to pre-train a LLaMA-60M model on the C4 dataset for 30K iterations. To further address this concern, we have conducted additional experiments on pre-training LLaMA-350M models, with results provided in the anonymous link (Figure 1). These results show that transitioning from GaLore to GoLore leads to a faster decrease in the loss curve, further demonstrating the effectiveness of GoLore in pre-training tasks. **Weakness 2. The improvement in the fine-tuning task is minor.** We respectfully disagree with the reviewer’s assessment. Notably, in the GLUE benchmark, our method achieves an average score of **86.03**, compared to GaLore’s **85.77**. This represents a **threefold reduction** in the performance gap between GaLore and AdamW (86.15). We believe this result highlights the effectiveness of our method, demonstrating that a simple modification can significantly close the performance gap. **Weakness 3. The proposed algorithm still relies on GaLore in the initial stage in the practical implementation.** Our hybrid strategy, which relies on GaLore in the initial stage, aligns with our theoretical insights. The limitation of GaLore arises when anisotropic gradient noise starts to dominate the true gradient, which typically occurs **in the later stages of training**. 
In the early stage, when true gradients are dominant, GaLore’s greedy strategy remains effective, and no modification is necessary. Empirically, applying GoLore from the very beginning (i.e., GoLore@100%) results in performance similar to that of GaLore, likely due to a slower initial phase. Therefore, we believe the hybrid strategy is preferable, as it ensures both a fast initial convergence and higher accuracy in the later stages. **Question 1. It would be better to include important parameters and discussion on these parameters in the main text.** The primary hyperparameter introduced beyond GaLore is the **switching point**, where we transition to GoLore. We kindly refer the reviewer to our rebuttal to **Reviewer iLWQ (Weakness 1 and 2)** for a detailed discussion, as well as to the **anonymous link (Figure 2 and 3)** for ablation results. **Question 2. It seems that the analysis didn't consider momentum and Adam cases. It would be better to clarify this.** We kindly remind the reviewer that GoLore's **convergence analysis** is based on **momentum SGD**, which explicitly accounts for momentum. Additionally, in our **non-convergence analysis**, Theorem 4 holds for **any optimizer $\rho$**, including Adam, AdamW, and momentum SGD. However, we acknowledge that whether GoLore **converges under Adam** remains an open question. To clarify this, we will revise the conclusion section as follows: >**Original:** A limitation of this paper is that recent GaLore variants, such as Fira, are not readily covered by our analysis framework. >**Revised:** A limitation of this paper is that our convergence analysis framework has not readily covered the use of the Adam optimizer and recent GaLore variants such as Fira. **Question 3. In your counter example and proof, can we reduce the stationary gap via reducing the step size? Is it an issue faced by all stochastic algorithms?** We cannot reduce the gap by decreasing step size. 
As demonstrated in **Theorem 4**, the non-convergence result holds for **any optimizer $\rho$**, regardless of the algorithm framework or hyperparameters, including the step size. Therefore, the counterexample illustrates an issue that is inherent to **all stochastic algorithms**. **Question 4. If your claim that SVD decomposition biases the gradient holds, why does the deterministic GaLore converge (Theorem 5)?** It may be more accurate to say that GaLore's greedy SVD projection biases the **stochastic gradient**, including the **gradient noise**. Intuitively, the bias introduced by gradient noise will not vanish as the gradient noise itself will not vanish. On the other hand, the bias in the **true gradient** can be bounded, as shown in **Lemma 1**, and this bias vanishes as the true gradient approaches zero. This is why GaLore can still converge in the deterministic case, despite the bias in the gradient. We hope these responses can clarify the reviewer's questions and are more than happy to address any further comments or questions. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification from the authors. I raised my score to 4. But I still have a concern: Could you clarify the advantage or the necessity, or the theoretical insight to update based on SVD decomposition in the early stage? Why is it better to follow this GaLore at the beginning of the training? --- Reply to Comment 1.1.1: Comment: Dear Reviewer ZeQP, Thank you again for acknowledging our previous response and for raising your score. We greatly appreciate your engagement with our work and are happy to provide further clarification regarding your question. Let $G=G_0+E$ denote the stochastic gradient, where $G_0$ is the true gradient and $E$ is the gradient noise, respectively. In the early stages of training, we typically have $\\|G_0\\|_F\gg\\|E\\|_F$, implying that the gradient is nearly noiseless. 
In this setting, GaLore's SVD-based projection provides an optimal low-rank approximation of $G$, whereas random projections do not exploit this structure.

To elaborate, when $\\|G_0\\|_F\gg\\|E\\|_F$, we can approximate $G\approx G_0$. Let $G=U\Sigma V^\top$ be the SVD of $G$, where the singular values satisfy $\sigma_1\ge\sigma_2\ge\cdots\ge\sigma_m$ (assuming $G\in\mathbb{R}^{m\times n} \text{ with } m\le n$). According to the proof of Lemma 1 (Line 801), the GaLore projection matrix $P_a=U[:r]$ satisfies:

$$\\|P_aP_a^\top G-G_0\\|_F^2\approx\\|P_aP_a^\top G-G\\|_F^2=\sum\_{i=r+1}^m\sigma_i^2.$$

In contrast, for a random projection matrix $P_o\sim\mathcal{U}(\mathrm{St}_{m,r})$, Lemma 5 (Line 874) yields:

$$\mathbb{E}[\\|P_oP_o^\top G-G_0\\|_F^2]\approx\mathbb{E}[\\|P_oP_o^\top G-G\\|_F^2]=\left(1-\frac{r}{m}\right)\\|G\\|_F^2=\frac{m-r}{m}\sum\_{i=1}^m\sigma_i^2.$$

The gap between these two approximations is:

$$\frac{m-r}{m}\sum_{i=1}^m\sigma_i^2-\sum_{i=r+1}^m\sigma_i^2=\frac{(m-r)r}{m}\cdot\left(\frac{1}{r}\sum_{i=1}^r\sigma_i^2-\frac{1}{m-r}\sum_{i=r+1}^m\sigma_i^2\right)\ge0,$$

with equality only when $\sigma_1=\cdots=\sigma_m$. As observed in GaLore (Zhao et al., 2024), gradients in deep learning exhibit low-rank structure, meaning $\frac{1}{r}\sum_{i=1}^r\sigma_i^2$ is often significantly larger than $\frac{1}{m-r}\sum_{i=r+1}^m\sigma_i^2$, making GaLore particularly effective in early training when gradients are less noisy.

We hope this explanation clarifies the theoretical advantage of using SVD-based projection in the early stages. Please let us know if you have further questions—we’d be happy to continue the discussion.

**Best regards**,
Authors of Submission 9646
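As a quick numerical sanity check of the two error formulas above, one can compare the truncated-SVD error against a Monte Carlo average over random orthonormal projections. This is an illustrative sketch (not part of the rebuttal); it uses the QR factorization of a Gaussian matrix as a standard way to sample the random projection, which is valid here because $P_oP_o^\top$ depends only on the column span.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 8, 12, 3

G = rng.standard_normal((m, n))  # stand-in for a (nearly noiseless) gradient
U, s, _ = np.linalg.svd(G, full_matrices=False)

# Greedy (GaLore-style) projection onto the top-r left singular vectors:
# the squared error equals the energy in the discarded singular values.
P_a = U[:, :r]
err_svd = np.linalg.norm(P_a @ P_a.T @ G - G, "fro") ** 2
tail_energy = np.sum(s[r:] ** 2)

# Random orthonormal projection (GoLore-style): QR of a Gaussian matrix
# gives columns whose span is uniform over r-dimensional subspaces.
errs = []
for _ in range(2000):
    P_o, _ = np.linalg.qr(rng.standard_normal((m, r)))
    errs.append(np.linalg.norm(P_o @ P_o.T @ G - G, "fro") ** 2)
err_rand = np.mean(errs)
expected_rand = (1 - r / m) * np.linalg.norm(G, "fro") ** 2
```

By the gap formula above, `err_svd` is never larger than `err_rand`; when `G` has a strongly low-rank spectrum (as deep-learning gradients do) the advantage of the greedy projection is correspondingly larger.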
Summary: This paper examines the convergence properties of subspace optimization algorithms for LLMs, focusing on GaLore. While GaLore is known for memory efficiency in pre-training and fine-tuning LLMs, this paper shows that it does not always converge under standard stochastic optimization settings. The authors substantiate this claim with a counterexample and explore conditions for its convergence, such as large mini-batch sizes or deterministic gradients. To address the limitations of GaLore, the paper introduces a new algorithm called GoLore (Gradient random Low-rank projection). Unlike GaLore's SVD-based projections, GoLore uses random projections, enabling convergence even with small mini-batches in stochastic settings. Theoretical analysis demonstrates GoLore's superior convergence properties under general conditions.

## Update after Rebuttal

As mentioned earlier, I maintained a positive score, and I hope the next revision will incorporate the discussed changes.

Claims And Evidence: The claims in the paper regarding the non-convergence of GaLore and the convergence guarantees of GoLore are supported by a combination of theoretical analysis, illustrative counterexamples, and empirical evaluation.

Methods And Evaluation Criteria: Yes it does.

Theoretical Claims: I checked a few convergence theorems, such as Theorems 10, 12, and 18; they look fine.

Experimental Designs Or Analyses: The authors provide a counterexample to the convergence guarantee, and they validate this with experiments in Figure 1.

Supplementary Material: Mentioned in theoretical claims.

Relation To Broader Scientific Literature: The paper addresses the issue of memory-efficient training for LLMs along with its guaranteed convergence.

Essential References Not Discussed: References seem to be well cited.

Other Strengths And Weaknesses: Positives: The paper supports its claims with rigorous theoretical analysis. The paper is well presented.
Negatives: The dependence on the assumptions may limit broader applicability. In a noisy setup, a random projection may add more uncertainty to the final claims.

Other Comments Or Suggestions: As mentioned before, the paper is well presented and there are no major typos or inconsistencies.

Questions For Authors: Can a sampling-based low-rank approximation be used instead of the random projection of GoLore or the singular value decomposition of GaLore?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for acknowledging our theoretical results and appreciate the efforts made to check our convergence proofs. All questions have been clarified as best as we can, and we are glad to address any further comments or questions. We include all new experiments in this **[anonymous link](https://www.hostize.com/v/pcWmN50BVa)**.

**Weakness 1. The dependence of the assumptions may limit broader applicability. In a noisy setup, a random projection may add more uncertainty in the final claims.**

We respectfully disagree with the reviewer’s concern that GoLore's broader applicability is limited by the dependence on assumptions, particularly regarding the noise scale. While introducing random projection may add more uncertainty, we argue that such randomness is, in fact, essential to mitigate the impact of gradient noise in noisy settings.

To see this, we first consider the deterministic projection used by GaLore. Let $G=G_0+E$ represent the stochastic gradient, where $G_0$ is the true gradient and $E$ is the gradient noise. In noisy settings where $\\|E\\|\gg\\|G_0\\|$, GaLore deterministically selects the projection matrix $P$ using the information of $G\approx E$. The noisier $E$ is, the noisier $P$ becomes. Moreover, GaLore's Top-K selection can degrade the unbiasedness of $E$, potentially leading to non-convergence.

In contrast, GoLore randomly samples $P$ from the same distribution, regardless of how noisy $E$ is. This approach shields $P$ from being influenced by the dominant gradient noise, ensuring its stability regardless of the noise level in $E$. Furthermore, GoLore's random projection maintains unbiasedness in subspace selection, as shown by $\mathbb{E}[PP^\top]=\frac{r}{m}\cdot I$ in Lemma 5.

Additionally, the assumptions regarding gradient noise in our analysis are standard in stochastic optimization and are widely accepted in convergence studies.
Therefore, we believe that our theoretical results are not unduly restricted by these assumptions and that random projection enhances the algorithm’s robustness in noisy settings.

**Question 1. Can a sampling-based low-rank approximation be used instead of the random projection of GoLore or the singular value decomposition of GaLore?**

Thank you for the interesting question. To explore this, we have additionally evaluated the performance of a sampling-based method as follows: Let $G\in\mathbb{R}^{m\times n}\ (m\le n)$ be the stochastic gradient matrix. Similar to GaLore, we first compute the SVD of $G$, obtaining $G=U\Sigma V^\top$. However, instead of selecting the first $r$ columns of $U$ as GaLore does, we sample $r$ columns with probabilities proportional to the corresponding singular values $\sigma_i$. As shown in Figure 1 in our anonymous link, the performance of this sampling-based strategy is similar to that of GaLore, and it is outperformed by our GoLore method. This is potentially because such an importance sampling strategy cannot lead to unbiased projection matrices and still suffers from the bias induced by gradient noise.

We thank the reviewer again for the careful comments and valuable suggestions. We hope these responses can clarify the reviewer's questions and are more than happy to address any further comments or questions.

---

Rebuttal Comment 1.1:

Comment: I appreciate the detailed response to the queries/concerns by the authors. I maintain my overall positive score.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer Eita,

Thank you for your time and consideration. We appreciate your thoughtful review and your positive assessment of our work.

Authors of Submission 9646
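The unbiasedness identity cited from Lemma 5, $\mathbb{E}[PP^\top]=\frac{r}{m}\cdot I$, is easy to verify empirically. This is an illustrative sketch (not part of the rebuttal), sampling $P$ via QR of a Gaussian matrix; the projector $PP^\top$ is invariant to column signs, so this sampling suffices.

```python
import numpy as np

rng = np.random.default_rng(1)
m, r, trials = 6, 2, 20000

# Average P P^T over many draws of P ~ U(St_{m,r}).
acc = np.zeros((m, m))
for _ in range(trials):
    P, _ = np.linalg.qr(rng.standard_normal((m, r)))
    acc += P @ P.T
mean_proj = acc / trials  # should approach (r/m) * I
```

By rotational invariance the exact mean is $(r/m)I$; the Monte Carlo average converges to it at the usual $O(1/\sqrt{\text{trials}})$ rate.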
Summary: The paper shows that the subspace projection of GaLore can be biased when approaching a local minimizer, where the principal components of the projection matrix mainly capture the information of the stochastic noise. Building upon this insight, the paper proposes a method named GoLore, which samples the projection matrix from the Stiefel manifold to ensure the unbiasedness of the gradient when the training is about to converge, i.e., close to a local minimizer.

Claims And Evidence: Yes. The claims are supported by the proofs and numerical results.

Methods And Evaluation Criteria: Yes, the C4 pretraining task and GLUE finetuning task are standard benchmarks for evaluating memory-efficient optimization algorithms.

Theoretical Claims: I have not gone through all the proofs, but I am familiar with the related techniques. The theorems make sense under the given assumptions.

Experimental Designs Or Analyses: Yes.

Supplementary Material: I have gone through the proof of the main theorem.

Relation To Broader Scientific Literature: The paper explains why GaLore can be much slower than Adam in the late training stage from a gradient-bias perspective. Both theoretical researchers and practitioners may find insight from this paper.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

**Strengths**

* The paper reveals that GaLore's slow convergence in the late training stage is due to the principal components being dominated by the stochastic noise. The results are justified by (non-)convergence results.
* The proposed strategy for avoiding biasedness is relatively easy to implement. Experiments on LLM pre-training demonstrate that adding a GoLore phase exhibits faster convergence than running GaLore alone.

**Weaknesses**

* The choice of switching point to GoLore may have a huge impact on convergence. However, the paper does not provide a guideline on how to choose this hyperparameter. The ablation study on the subspace update frequency $\tau$ is missing as well.
Other Comments Or Suggestions: It may be informative to plot the norm of the gradient bias for both GaLore and GoLore (this may be conducted on a relatively small dataset for cheap evaluation of the full gradient).

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for acknowledging our theoretical results and experimental designs, as well as the detailed comments and suggestions. All questions have been clarified as best as we can, and we are glad to address any further comments or questions. We include all new experiments in this **[anonymous link](https://www.hostize.com/v/pcWmN50BVa)**.

- **Weakness 1. The paper does not provide a guideline on how to choose the switching point.**

  We appreciate the reviewer’s valuable feedback. In our experiments, we manually selected the switching points from $\\{20\\%,30\\%,40\\%,50\\%\\}$ based on intuition and prior experience. Specifically, we choose an earlier switching point if we expect the algorithm to converge more quickly to the solution's neighborhood and a later one if we anticipate slower convergence. To provide a broader guideline, we offer the following insights:

  - If access to the true gradient is available, a reasonable switching point would be when gradient noise begins to dominate the true gradient.
  - If only stochastic gradients are available, and assuming the gradient noise has a roughly constant scale, this dominance can be estimated by monitoring whether the norm of the stochastic gradients falls below a certain threshold. This threshold serves as a hyperparameter that depends on both the training task and the batch size used in the algorithm.
  - The optimal switching point is task-dependent. Empirically, it is recommended to switch when the rate of decrease in the loss curve starts to slow down.

- **Weakness 2. The ablation study on the subspace update frequency is missing.**

  We appreciate the reviewer’s suggestion. We did not initially include an ablation study on the subspace update frequency $\tau$ because the original GaLore paper (Zhao et al., 2024) states that values of $\tau$ between 50 and 1000 have minimal impact on performance.
  Based on this, we set $\tau=500$ to:

  - reduce computational overhead, and
  - ensure a sufficient number of GoLore projection steps to enhance performance.

  However, in response to the reviewer's concern, we have provided additional ablation results in Figure 3 in the anonymous link. These results confirm that varying $\tau$ within the range of 50 to 1000 has a negligible effect on performance.

- **Suggestion 1. It may be informative to plot the norm of the gradient bias for both GaLore and GoLore.**

  We appreciate the reviewer’s valuable suggestion. In response, we have additionally plotted the gradient bias norms in Figures 4 and 5 in the anonymous link, demonstrating that GoLore achieves a smaller gradient bias.

We thank the reviewer again for the careful comments. We hope these responses can clarify the reviewer's questions and are more than happy to address any further comments or questions.
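For concreteness, the stochastic-gradient-norm heuristic from the switching-point guideline above could look roughly like the following. This is a hypothetical helper for illustration only, not code from the paper; `grad_norm_threshold` is the task- and batch-size-dependent hyperparameter mentioned in the rebuttal.

```python
import numpy as np

def choose_projection(G, r, grad_norm_threshold, rng):
    """Pick a rank-r projection matrix for the current subspace update.

    While the stochastic gradient norm is large (true gradient dominates),
    use the greedy SVD projection (GaLore phase); once it falls below the
    threshold (noise dominates), switch to a random orthonormal projection
    sampled via QR of a Gaussian matrix (GoLore phase).
    """
    m = G.shape[0]
    if np.linalg.norm(G) >= grad_norm_threshold:
        U, _, _ = np.linalg.svd(G, full_matrices=False)
        return U[:, :r]  # top-r left singular vectors
    P, _ = np.linalg.qr(rng.standard_normal((m, r)))
    return P  # span uniform over r-dimensional subspaces
```

In either branch, the memory-efficient update would then proceed in the subspace via the projected gradient $P^\top G$, as in GaLore.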
Long-Form Speech Generation with Spoken Language Models
Accept (oral)
Summary: This paper aims to develop a speech language model (LM) capable of modeling long-form speech. The key distinctions of this paper compared to previous studies are as follows: (1) introducing State Space Models (SSMs) into the speech LM at the speech-token level, resulting in improved perplexity (PPL) and other evaluation metrics compared to existing baselines; (2) highlighting shortcomings in previous evaluation methods for speech LMs, proposing a more effective evaluation approach, and introducing a new benchmark, "LibriSpeech-Long," specifically designed to evaluate these capabilities.

## Update after rebuttal

As I mentioned in my official comment below, I have read the authors' rebuttal and slightly increased my score accordingly. Once again, thank you for your prompt and thoughtful response.

Claims And Evidence: The authors have established several criteria that a speech LM designed to model long-form speech should satisfy. They also described how State Space Models (SSMs), along with their proposed model and dataset, were employed to meet these criteria.

Methods And Evaluation Criteria: I have a few questions below regarding the evaluation scheme, specifically parts that were slightly unclear or raised my curiosity. Aside from these points, everything else seems appropriate.

Theoretical Claims: Since this paper focuses primarily on applying SSMs, there do not appear to be specific issues regarding proofs or theoretical claims.

Experimental Designs Or Analyses: I have some questions specifically about the evaluation rather than the experimental design itself. I'll address these points separately below.

Supplementary Material: I have reviewed the evaluation details provided in the appendix, including those related to human evaluation and the use of an LLM-as-a-judge.

Relation To Broader Scientific Literature: I agree with the authors' contribution in integrating SSMs into a token-level speech LM, which has not been addressed by previous studies. Additionally, highlighting the limitations of existing evaluation methods and proposing an improved evaluation metric also constitute contributions of this paper.

Essential References Not Discussed: I found no issues with references.

Other Strengths And Weaknesses:

### Strengths

1. Introducing SSMs at the speech-token level to enable effective long-form modeling.
2. Proposing both a benchmark and novel metrics designed explicitly for evaluating long-form speech generation.
3. Overall, the approach is reasonable, and the authors have leveraged the advantages of SSMs for long-form generation.

### Weaknesses

1. Regarding the N-MOS (ignoring grammar and content) and SpkrSim metrics, I believe these primarily reflect the qualities inherited from the speech tokens proposed by previous works such as USM-v2 or SoundStream, and the speech decoder (SoundStorm), rather than being novel contributions of this paper.
   - If the goal was to compare these aspects, the baselines should also have been trained as personalized speech synthesis models, or similarly, the USM-v2 tokens used by baselines should have been decoded with a separately trained vocoder such as HiFi-GAN, as in previous works. (The controlled variables should have been introduced only in the proposed SSM-based model.)
2. Regarding the perplexity (PPL) results, the authors show that SSM is effective compared to Transformers. However, for other metrics, particularly the N-MOS results in Tables 2 and especially 3, I notice the confidence intervals are notably large. I am curious why these confidence intervals are so wide, what the corresponding p-values are, and whether it is indeed possible to conclusively determine the advantage of SSM over Transformer-based models based on these numbers.
3. In terms of novelty, I personally feel the contributions might be somewhat limited. Aside from introducing a new benchmark and proposing new evaluation metrics, the main impression is that the paper simply applied SSM to speech tokens. If I have missed any additional structural modifications or specialized adaptations specifically designed for speech data within the proposed model, I would appreciate further clarification from the authors.

Other Comments Or Suggestions: The paper is well-written, and I did not notice obvious typos.

Questions For Authors:

1. In recent language model trends, scalability has emerged as an important factor. I am curious about how the performance of the proposed SSM-based approach changes as the model scales up in speech LLMs. Specifically, would performance significantly improve with increased scale, or is there a clear performance upper bound?
2. Additionally, during speech generation, the authors mention generating audio chunks with overlapping segments for memory efficiency, then concatenating them. I would like to know whether this concatenation process introduces any artifacts at the boundaries. If artifacts exist, it would be helpful to understand their nature.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for your thorough reading of our work!

> W1: SpkrSim and N-MOS reflect speech tokens; decoupled tokenizer/vocoder baselines?

**We agree N-MOS and SpkrSim are primarily due to the tokenizer/vocoder**, hence why it felt unnecessary to isolate them from the backbone architecture change, as these metrics would focus on our "acoustic" improvements. Then, to isolate backbone improvements from tokenizer/vocoder, we introduced SpeechTransformer as a baseline, sharing the same speech tokenizer (USM-v2) and vocoder (SoundStorm) with SpeechSSM.

Also, despite using existing tokens, **we view our _deliberate integration_ as the contribution**. Other schemes entangle acoustic and semantic properties, and so their N-MOS is tied to long-form ability (-T); ours is not, due to our fixed context and semantic-to-acoustic stage. Other models have speaker drift, while our speaker-invariant tokens (USMv2) *and* speaker-prompted acoustic stage ensure even and high SpkrSim.

> W2: N-MOS sufficient for SSM over Transformer?

Due to the shared tokenizer/vocoder/etc., **the narrow N-MOS gap between SpeechTransformer and SpeechSSM is _expected_**. This is also why our proposed metrics are important, such as Win%, PPLs at 4x train (Tab. 4), and SC-L (Fig. 5), isolating the semantic delta of SSM vs. Transformer.

Regardless, **we’ve greatly expanded our human evals. We report(ed) 99% CIs**, where each model had 200 items, 3 ratings each (incorrectly ‘5’ in Appendix C). We 2x this to get **[tighter intervals for Table 2](https://raw.githubusercontent.com/demo474/ID4403_Demo/refs/heads/master/figures/rebuttal-updated-table2.png)**. Our N-MOS-T used a different, smaller rater pool due to technical/time issues. We've now aligned it with short-form N-MOS to give **[far better 99% intervals for Table 3](https://raw.githubusercontent.com/demo474/ID4403_Demo/refs/heads/master/figures/rebuttal-updated-table3.PNG)**: 1200 ratings per (model, slice).
While magnitudes in the top range decreased, trends remain. In particular, **SpeechSSM now clearly improves over SpeechTransformer in N-MOS-T**; their 99% intervals do not even overlap in two of the columns. This also **shows N-MOS-T is affected by the backbone model as well.**

> W3: Simply applied SSM; modifications/adaptations specifically designed for speech data?

As our primary contribution is advancing long-form speech generation over tens of minutes, and being the first work on the topic, we outlined the task's minimum requirements (Section 3, first paragraph) and **intentionally meet them in a straightforward way**. Our architecture, benchmark, and evals _stem from the task_ -- issues with the obvious SpeechTransformer baseline, issues with metrics that capture acoustic issues rather than semantic quality, etc. This did involve key modifications: using hybrid SSMs from text LMs; disentangling speaker identity, acoustic, and semantic aspects; windowed tokenizing and decoding; repeated padding and avoiding implicit EOSes; using NoPE. Though these are not architectural changes in the narrow sense, they contribute to length generalization, enabling the model to effectively generate speech of unbounded length.

> Q1: Would performance significantly improve with increased scale?

We expect performance to continue improving beyond 2B. This expectation is supported by the Griffin paper, the underlying hybrid recurrent architecture used by RecurrentGemma and thus SpeechSSM, which showed consistent gains across 1B, 3B, 7B, and 14B models on downstream text tasks. Towards this, **we have started training a SpeechSSM 9B**, initialized from RecurrentGemma 9B, which was unavailable at the start of our work. **[Its training curve versus 2B is very promising and suggests headroom](https://raw.githubusercontent.com/demo474/ID4403_Demo/refs/heads/master/figures/rebuttal-9b-inprogress.png).**

> Q2: Would the concatenation process introduce artifacts at the boundaries?
Yes, **there are artifacts due to waveform splicing, but they are very subtle due to overlapping,** as the mismatch in the contexts used to generate audio before vs. after the splice only starts 50 tokens (2s) away. Since SoundStorm uses 3s for speaker prompting, it synthesizes in 27s chunks, whose last 4s are overlapped with the next chunk. Hence, **when boundary artifacts occur, they occur at `25 + 23*N` seconds.** In the third file on the [demo page](https://demo474.github.io/ID4403_Demo/), at :25, the male voice gets briefly louder.

We take 5s windows centered at the boundaries (“at concat”) and 5s windows at the chunk centers (“not at concat”). There are 10 of each window type in a 240s generation, giving 50 x 10 x 2 ratings/item = 1000 ratings per type. At 99% CI we get 4.05 ± 0.07 vs. 4.07 ± 0.07, showing **artifacts do not appear in MOS, [nor do they appear systematically in the categories raters can flag](https://raw.githubusercontent.com/demo474/ID4403_Demo/refs/heads/master/figures/rebuttal-concat-artifacts.PNG).**

---

Rebuttal Comment 1.1:

Comment: Thank you for the kind response. The authors have addressed all the concerns I raised. Therefore, I will slightly increase my score.
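As an aside on the splice-time arithmetic in the Q2 answer above: with 27s chunks whose last 4s overlap the next chunk, the stride between chunk starts is 23s, and, under the assumption (not stated explicitly in the rebuttal) that each splice sits at the midpoint of the shared overlap, the splice times reproduce the `25 + 23*N` pattern. An illustrative sketch:

```python
def splice_points(num_chunks, chunk_s=27.0, overlap_s=4.0):
    """Splice times for overlapped chunk decoding.

    Consecutive chunks advance by a stride of chunk_s - overlap_s; each
    splice is placed at the midpoint of the overlap region shared by two
    consecutive chunks.
    """
    stride = chunk_s - overlap_s  # 23 s between chunk starts
    return [n * stride + chunk_s - overlap_s / 2.0 for n in range(num_chunks - 1)]
```

For example, `splice_points(4)` yields splice times at 25, 48, and 71 seconds, i.e., `25 + 23*N`.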
Summary: This paper introduces ***SpeechSSM***, the first speech-language model for long-form speech. Two new metrics and a new benchmark are proposed. Experiments and analysis are comprehensive.

Claims And Evidence: **Yes**

Methods And Evaluation Criteria: **Yes**

Theoretical Claims: **Yes** There is no theoretical claim in this paper.

Experimental Designs Or Analyses: **Yes** The authors conduct experiments for short- and long-form speech generation separately. The experiments are well-designed. Comparisons and evaluations are comprehensive.

Supplementary Material: **Yes** The generated speech audios are available in the supplementary material. The results are very competitive.

Relation To Broader Scientific Literature: The key contributions of this paper include:
1. **SpeechSSM** -- the first speech-language model for long-form speech.
2. Two new metrics and a new benchmark for long-form speech evaluation.

The related works seem to be discussed in Section 2.

Essential References Not Discussed: **Maybe not** I thought the related works were discussed.

Other Strengths And Weaknesses:

**Strengths:**
1. This paper is well-written. The figures and tables are clear and easy to understand.
2. The results generated by the proposed method are very competitive.
3. As the authors claimed, the proposed methods significantly increase the audio length during inference and training, which is meaningful for real-world applications.
4. The two new metrics and the new benchmark designed for long-form speech are useful.

**Weaknesses:**
1. (Minor) The architecture of SpeechSSM appears somewhat straightforward and could benefit from more innovative design considerations.
2. (Minor) The windowed tokenization and decoding approach represents a compromise solution rather than an optimal one.

Other Comments Or Suggestions: There are no further comments from the reviewer.

Questions For Authors: The reviewer acknowledges that the weights are currently not released. Will the model and weights be released in the future? This is important for further research.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for your review and support!

> (Minor) The architecture of SpeechSSM appears somewhat straightforward and could benefit from more innovative design considerations.

As our primary contribution is advancing long-form speech generation over tens of minutes, and being the first work on the topic, we outlined the task's minimum requirements (Section 3, first paragraph) and **intentionally meet them in a straightforward way** so that we could also focus on dataset and metric design. Nonetheless, this already involved several key design choices, such as: using hybrid SSMs from text LMs; disentangling speaker identity, acoustic, and semantic aspects; windowed tokenizing and decoding; repeated padding and avoiding implicit EOSes; using NoPE. By establishing a strong baseline and benchmark, we hope future work can focus on new and innovative architectures for long-form speech.

> (Minor) The windowed tokenization and decoding approach represents a compromise solution rather than an optimal one.

The overlap windowing approach was a practical solution to effectively utilize a bounded tokenizer and decoder to achieve unbounded speech generation. Future work could explore streaming tokenizers to handle extended speech generation, but these have to be trained from scratch. In contrast, our method works on default non-causal tokenizers like HuBERT and USMv2. **It is close to optimal from the viewpoint of "working out of the box", and also from mitigating boundary artifacts**, as we quantify at the end of our reply to Reviewer QbKA (#4). We consider our design and comprehensive details (e.g., avoiding implicit EOSes, where to speaker prompt) a contribution in its own right.

> The reviewer acknowledges that the weights are currently not released. Will the model and weights be released in the future? This is important for further research.

**SpeechSSM is largely RecurrentGemma finetuned on a public dataset**, and so we believe it is easily reproduced.
As USMv2 is not widely available, another speech tokenizer would be required, of which there are now many viable candidates (e.g., SpeechTokenizer, Mimi). Due to safety considerations and a proprietary dataset, we are unlikely to release SpeechSSM-X. While for institutional reasons we can't promise a full release of the (LibriLight) SpeechSSM, **we could release finetuning code, e.g., atop [the Gemma finetuning library](https://gemma-llm.readthedocs.io/en/latest/colab_finetuning.html) and a public speech tokenizer**, to support further research.

---

Rebuttal Comment 1.1:

Comment: The reviewer has read the rebuttal from the authors. All the concerns from the reviewer have been addressed. The reviewer decides to keep the final rating as 4 (accept).
Summary: The paper proposes SpeechSSM, a spoken language model designed for long-form speech generation. It is based on state-space models, enabling efficient generation with constant memory consumption. To evaluate the model on long generations, the authors propose the LibriSpeech-Long benchmark and new evaluation metrics including embedding-based similarity, LLM-judged scoring, and time-stratified quality assessments. Results show that SpeechSSM performs on par with existing spoken language models for short-form generations and outperforms them on long-form speech generation.

Claims And Evidence: The paper makes the following main claims:
1. Model-wise: SpeechSSM is the first state-space language model for speech and provides long-form speech generation.
2. Evaluation-wise: The paper introduces new evaluation metrics for long-form speech generation, including side-by-side LLM-as-judge scoring and time-stratified quality assessments.

The paper provides clear and convincing evidence for both claims through experimental comparisons.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper are well-suited for the problem of long-form speech generation. To evaluate the model's performance the authors propose four evaluation methods:
1. LibriSpeech-Long benchmark: To assess the model’s performance on extended speech continuations, the authors reformatted the original LibriSpeech dataset into longer 3-4 minute segments, creating the LibriSpeech-Long benchmark.
2. Embedding-Based Metrics: Semantic embedding-based metrics are used to evaluate the content preservation of the generated speech.
3. LLM-as-Judge: Large language models are utilized as judges to provide side-by-side evaluations of generated speech samples.
4. Time-Stratified Evaluations: The paper introduces time-stratified quality assessments to analyze the consistency of speech over different time intervals within the long-form generation.

Theoretical Claims: N/A

Experimental Designs Or Analyses: Two main experiments have been performed:
1. Short-Form Continuation Experiments: The model is evaluated on 7s continuations given a 3s prompt, assessing short-term coherence and speaker similarity.
2. Long-Form Generation Experiments: The model generates up to 16 minutes of speech from a 10s prompt, evaluating semantic coherence, fluency, and speaker consistency across extended durations.

Main weaknesses:
1. No human evaluation is performed beyond N-MOS. While the authors employ LLM-as-a-judge, a human-LLM correlation analysis would be very interesting.
2. Limited dataset diversity in training and evaluation. The proposed model is trained and evaluated on the same audio/speaker conditions (e.g. background/noise, neutral speech), whereas the other models have been trained on different data. If experiments on different datasets were provided, it would demonstrate the robustness of the approach.

Supplementary Material: I reviewed all parts of the supplementary material, which provide additional results and implementation details, including the prompt used for N-MOS evaluation and the data collection process. It also includes an LLM-as-a-judge example and generated speech samples for qualitative analysis. Finally, the supplementary material also provides a demo page showcasing SpeechSSM's effectiveness, including examples of long-form speech generation. Every section helps to improve the clarity of the paper.

Relation To Broader Scientific Literature: The paper's key contributions align closely with advancements in the broader scientific literature. In particular, while state-space models have been used in speech generation, none have been applied as a spoken language model operating directly on acoustic tokens. Unlike prior transformer-based approaches such as TWIST and AudioLM, the proposed method leverages state-space models for more efficient long-form generation with constant memory complexity. Additionally, the paper builds upon standard evaluation metrics for short-form generation and extends them for long-form evaluation. First, it introduces the LibriSpeech-Long dataset, created by reformatting the LibriSpeech dev and test sets to enable longer speech evaluations. To evaluate long-form generated samples, it proposes new evaluation metrics for assessing the quality and coherence of extended speech generation (LLM-as-Judge, embedding-based metrics, and time-stratified evaluation).

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

Strengths
1. Efficient and scalable approach. SpeechSSM has constant memory complexity and linear-time inference.
2. New evaluation metrics and the LibriSpeech-Long benchmark are important for long-form speech research.

Weaknesses
1. Ablation analysis is missing. How do different architectural choices (e.g. hyperparameters) impact model performance?
2. Only audiobook generations are considered. How would the model perform in other scenarios like dialogues?

Other Comments Or Suggestions: NA

Questions For Authors:
1. How does the choice of windowing parameters impact performance? How do different token widths and overlap sizes influence the model’s performance?
2. How does your model perform with different prompt durations? For example, why did you choose 10 seconds for long-form generation instead of 3 or 5 seconds?
3. Have you evaluated SpeechSSM on more expressive speech datasets, such as EXPRESSO or EmoV?
4. Have you tested the model for longer than 16-minute speech generation (e.g. 60 min)? Does it maintain semantic coherence and speaker identity?
5. How does your model handle non-linguistic vocalizations such as laughter or sighs?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your review and thoughtful feedback about our work! > **Major Weaknesses:** > > No human evaluation beyond N-MOS Note **we have strengthened our MOS results**; see response to Reviewer QbKA (#4). > LLM-as-judge vs. human correlation analysis LLM judges align closely with human preferences (over 80% agreement) in pairwise comparison (Zheng et al, 2023), with SxSes more stable than single-answer grading. When we started, our LLM as judge (Gemini 1.5 Pro) was [the top generative LLM-as-judge](https://x.com/aseveryn/status/1793605627232833887) on [RewardBench](https://huggingface.co/spaces/allenai/reward-bench), with a score of **88% (roughly viewable as human agreement %)**. Our rubric is based on story generation evaluations (Section 5.2 + Appendix D), a reading/style comprehension task we believe is similar to tasks evaluated by these works. That said, a correlation analysis specific to our transcript SxS would be informative; we'll investigate this for the camera ready. > Limited dataset diversity... proposed model is trained and evaluated on the same audio/speaker conditions ... other models have been trained on different data. Experiments on different datasets... > > **Other Weaknesses:** > > Only audiobook generations... other scenarios like dialogues? We restricted our main model to LibriLight for fair comparison with non-Spirit LM works. However, to demonstrate diversity, **in 6.4 we also trained a version (SpeechSSM-X) on extemporaneous podcast-style monologue.** Our model **replicated this data's expressive and varied nature**, as can be heard in the [SpeechSSM-X samples on our demo page](https://demo474.github.io/ID4403_Demo/). That said, our models generate a single voice as they disentangle speaker identity (a design contribution of our work). **We see no blockers to training a model with entangled tokens to model many voices in dialogue.** > Ablation... architectural choices (e.g. 
hyperparameters) impact model performance? We ablated train/test lengths and Transformer vs. SSM. **We also now ablate text LM initialization and model size**; see Reviewer k44A (#1) and Reviewer QbKA (#4) rebuttals for details. We could explicitly discuss more things we tried e.g., RoPE length issues or overlap parameters in the camera ready. > **Q:** EXPRESSO, EmoV? To quantify SpeechSSM-X's performance, we are trying Expresso and could share results during this period if completed in time. We are considering context-based continuation (e.g. ["Subjective metrics" by Sesame](https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice)) but welcome suggestions. > **Q:** window parameters, [tokenizer] overlap, and performance For our top concern of unbounded generation, we found that avoiding "implicit EOSes" was key (Section 3), so we used the largest window supported by USMv2; we only overlap to avoid edge tokens. It's possible that intentionally smaller windows may improve performance, which we leave to future work. > token widths and performance Longer audio token widths are an active research area which we believe is driven by poor long-form performance of current models. **We intentionally focus on architecture/training improvements _instead_ of larger tokens**, which we'll clarify in the camera ready. Future work can combine both approaches. > [synthesizer] overlap and performance We tested 0s, 2s, 4s, and 8s overlaps, and found that **no overlap produces artifacts at boundaries**. The smallest overlap we tried already significantly reduced these artifacts, so we chose it. See final part of our response to Reviewer QbKA (#4) for metrics. > **Q:** different prompt durations? 10 seconds for long-form generation instead of 3 or 5 seconds? **Shorter prompts are insufficient initial context to act as anchor** to compare against for long-term coherence, e.g., our SC-L metric of semantic similarity between the prompt and segments of the continuation. 
A prompt of at least 10 seconds (1-2 sentences) differentiates models (Figure 5) and constrains the space of valid continuations for LLM judging. > **Q:** >16min speech generation; semantic coherence and speaker identity? **We have generated with SpeechSSM up to 30 minutes** and see no issues going further. The model maintains speaker identity and speech naturalness indefinitely (as expected from moving speaker modeling to the acoustic layer), with a gradual semantic drift as one may expect from SC-L scores in Figure 5. However, paragraph+ coherence issues [seen in our 16min samples](https://demo474.github.io/ID4403_Demo/) should be improved first before such generations are worthwhile. > **Q:** non-linguistic vocalizations such as laughter or sighs? As long as the model is trained on data with such vocalizations, it can learn to reproduce them. **In the third [demo page](https://demo474.github.io/ID4403_Demo/) sample, from 45s - 1:15s, there are sighs, exhalation, and fillers (“uh”, “um”, “uh-huh”).**
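As a concrete illustration of the time-stratified semantic coherence idea discussed in this thread (the SC-L metric compares the prompt against successive segments of the continuation), here is a minimal sketch. This is not the authors' implementation: the toy 3-dimensional vectors stand in for real text-embedding outputs, and the function name is a placeholder.

```python
import numpy as np

def cosine(a, b):
    # plain cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_coherence_over_length(prompt_emb, segment_embs):
    """Score each time-stratified continuation segment against the prompt.

    A flat, high curve suggests the continuation stays on topic; a
    decaying curve indicates gradual semantic drift over time.
    """
    return [cosine(prompt_emb, seg) for seg in segment_embs]

# toy 3-d "embeddings": segment 0 is close to the prompt, segment 1 is not
prompt = np.array([1.0, 0.1, 0.0])
segments = [np.array([0.9, 0.2, 0.1]), np.array([0.0, 0.2, 1.0])]
scores = semantic_coherence_over_length(prompt, segments)
```

In practice the embeddings would presumably come from a sentence-embedding model applied to transcripts of fixed-length windows of the generated audio.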
Summary: The paper proposes SpeechSSM, a spoken language model based on state-space models that can generate long-form audio in a single pass. The paper also proposes reference-based semantic similarity and LLM-based pairwise judgment to evaluate the generated long-form audio. They also released a new dataset LibriSpeech-Long for evaluating speech continuation. The SpeechSSM outperforms the baselines in both short-form and long-form generation. The new metrics and dataset enable better evaluation for long-form speech generation. ## update after rebuttal: I decided to keep my score. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem at hand. Theoretical Claims: The paper does not have any proofs for theoretical claims. Experimental Designs Or Analyses: The baselines are extensive and comparable. The analysis on semantic coherence and error modes is comprehensive. The Librispeech-Long benchmark is well-designed for the task. Supplementary Material: I checked the additional results and MOS evaluation details, further strengthening the claims and providing more insights on how the model performs on long-form speech generation. Relation To Broader Scientific Literature: The core contribution of this work is built on the developments in many different areas, most importantly language models and speech representation learning. It deals with the problem of spoken language modeling, with the core challenge being the choice of the right model and speech representation for modeling. Previous approaches, like GSLM, used transformers to model speech tokens from HuBERT. Later, speech representation modeling improved. SoundStorm, for example, incorporated hierarchical modeling with RVQ which inspired AudioLM to do better modeling. However, the previous approaches were not good at modeling long speech sequences. 
That was until state space models like S4 arrived and substantially improved time-series modeling for long sequences. For the evaluation methods introduced in this paper, it is also backed by the recent advancements in large language models (LLMs). Many papers are now starting to make use of LLMs to build evaluation metrics for their methods. Essential References Not Discussed: None Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: How important is the text-based initialization (using RecurrentGemma-2B) for speechSSM's performance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your comprehensive reading of our work! > How important is the text-based initialization (using RecurrentGemma-2B) for speechSSM's performance? Following the insights from Hassid et al. (2023)'s TWIST model, which demonstrated that initializing spoken language models with pretrained textual knowledge enhances semantic coherence and fluency, we initialized SpeechSSM with RecurrentGemma 2B IT’s weights. As both SpeechTransformer and SpeechSSM are initialized with first-generation Gemma 2B models, we believe **the text-based initialization is not important to SpeechSSM’s gains over SpeechTransformer**. Furthermore, Spirit LM is initialized with Llama 2 7B, which is comparable to Gemma 2B (Table 6 of [the Gemma paper](https://arxiv.org/pdf/2403.08295) gives scores of 46.9 vs. 45.0 averaged over 18 benchmarks), so **we believe it is not important to our gains over Spirit LM either.** Though we did not prioritize this ablation (as LM initialization is “free” and benefits were shown by TWIST and Spirit LM), it is an interesting question and new for speech SSMs. **We have trained a SpeechSSM 4m 2B with random initialization.** [Here is the training plot](https://raw.githubusercontent.com/demo474/ID4403_Demo/refs/heads/master/figures/rebuttal-2b-randominit.png). Despite starting from a higher training loss (expected), it converges to a lower training loss (unexpected)! We suspect that our removal of RoPE may have harmed transfer, as keeping RoPE gives a similar loss curve (see orange). Note **these are audio token losses**; the generated _text_ may be worse. There are also considerations beyond loss; early experiments suggested that RoPE prevented unbounded length generation, a key requirement for our system. **We will validate this ablation and share results during the discussion period if possible.**
Topology-Aware Dynamic Reweighting for Distribution Shifts on Graph
Accept (poster)
Summary: This paper introduces Topology-Aware Dynamic Reweighting (TAR), a novel framework designed to enhance node classification performance under distribution shifts by leveraging graph topology. Unlike invariant learning approaches that rely on strict assumptions about environment labels, TAR applies a dynamic sample reweighting strategy that incorporates gradient flow along graph edges to adjust sample importance during training. The framework employs a minimax optimization process: the inner maximization learns probability densities for samples, while the outer minimization trains the model on a dynamically weighted distribution. Theoretical analysis demonstrates that TAR identifies the local worst-case distribution, improving robustness against unseen distributions. Experimental evaluations on multiple OOD node classification benchmarks show that TAR consistently outperforms existing methods, particularly in settings where domain information is unavailable. Claims And Evidence: TAR's theoretical foundation ensures robustness and convergence: while the theory is sound, additional empirical ablations (e.g., impact of different loss functions) could strengthen the claim. Methods And Evaluation Criteria: The evaluation methodology is generally appropriate for the problem. Theoretical Claims: Yes Experimental Designs Or Analyses: The experiments are well-designed. Supplementary Material: The paper does not seem to provide substantial additional insights in supplementary material. Relation To Broader Scientific Literature: The paper is well-positioned within the OOD generalization and graph representation learning literature. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. Unlike standard sample reweighting methods that treat training instances independently, TAR explicitly considers graph topology during reweighting. 
The use of gradient flow along graph edges ensures that sample weights are influenced by the structural context of nodes, making the approach more aligned with the underlying data distribution. This is particularly important in graphs where neighboring nodes share similar properties. 2. The inner maximization problem dynamically adjusts sample weights by optimizing against a local worst-case distribution, while the outer minimization problem trains the model using these reweighted samples. This approach is inspired by distributionally robust optimization (DRO) but extends it to a graph-based setting with topology-aware constraints. The theoretical justification for this method strengthens its credibility. 3. The paper provides rigorous theoretical analysis that supports the proposed method. These results ensure that the dynamic reweighting mechanism is theoretically grounded, rather than being an empirical heuristic. Weaknesses: 1. While TAR improves robustness under distribution shifts, it is unclear how much performance degradation occurs on in-distribution (ID) data. Many robust optimization methods sacrifice ID accuracy to gain robustness, but the paper does not explicitly analyze this trade-off. A controlled study where ID and OOD accuracy are jointly evaluated would provide deeper insights into this aspect. 2. TAR’s effectiveness heavily relies on gradient flow over graph edges. However, in sparse graphs with low connectivity, the amount of information that can be propagated during sample reweighting may be severely limited. Conversely, in densely connected graphs, the method might smooth weights too aggressively. The paper does not systematically analyze how TAR performs across different graph sparsity levels or degree distributions, which could affect its generalizability. 3. 
The method optimizes a weighted loss function using cross-entropy loss, but it is unclear how TAR would perform under alternative loss functions (e.g., margin-based losses, contrastive losses). Some OOD methods are highly sensitive to loss function choice, and the paper does not explore whether TAR is robust to such variations. Other Comments Or Suggestions: 1. How does TAR compare to simpler DRO methods in terms of training time? 2. How does TAR perform on heterophilic graphs, where homophily assumptions do not hold? Questions For Authors: Regarding the questions, please refer to the weaknesses section in the strengths and weaknesses part. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your careful review and insightful feedback. Below we provide point-by-point responses to address all raised concerns. # Performance when there is no shift during testing Thank you for your insightful suggestions! We compare TAR with several typical robust optimization methods, and the results are presented in the table below. Our key observations are as follows: - On the CBAS and Twitch datasets, TAR consistently outperforms baselines in both IID (no shift) and OOD (covariate shift) settings. Notably, the performance improvements under OOD are significantly more pronounced than those under IID. - Some methods, such as VREx, while achieving notable gains on CBAS and Twitch under the OOD setting, exhibit substantial performance degradation in the IID setting. - On the Cora dataset, TAR shows a slight, statistically insignificant drop in performance under IID. However, it achieves the best performance under the OOD setting, surpassing all other methods. These results highlight TAR's robustness and effectiveness across different scenarios. |dataset|CBAS|CBAS|Twitch|Twitch|Cora|Cora| |-|-|-|-|-|-|-| |shift|IID|OOD|IID|OOD|IID|OOD| |ERM|86.00±1.20|78.29±3.10|72.83±1.23|50.04±2.53|69.65±0.55|64.72±0.54| |VREx|83.14±3.26|78.57±2.02|70.84±4.15|51.26±4.82|69.63±0.31|65.02±0.45| |KLDRO|86.00±1.86|77.71±1.63|73.35±0.63|53.76±4.19|69.64±0.62|64.85±0.51| |GroupDRO|86.29±1.28|77.14±2.67|71.54±3.34|50.48±2.04|**69.77±0.26**|64.95±0.59| |TAR|**90.29±2.93**|**87.43±5.09**|**73.73±0.44**|**57.82±2.13**|69.53±0.77|**65.64±0.37**| # Performance of different graph sparsity levels Thank you for your question! As shown in the table below, the five benchmark datasets we employed exhibit diverse degree distributions, with average node degrees ranging from 1.52 to 26.15. Our method demonstrates consistently strong performance across all these varying degree distributions as you can see in Tables 1 and 2 in our paper. 
|dataset|webkb|cbas|twitch|cora|arxiv| |-|-|-|-|-| |avg. degree|1.52|5.66|26.15|6.41|13.67| # Performance on heterophilic graphs |dataset|webkb|cbas|twitch|cora|arxiv| |-|-|-|-|-| |avg. node homophily|**0.14**|0.60|0.64|0.59|0.64| Thank you for your insightful question! As shown in the table above, our evaluation covers datasets with varying levels of homophily. Notably, WebKB represents a standard heterophilic graph benchmark. We additionally construct another OOD dataset, Chameleon, following the same principle as the GOOD benchmark [1] and using node degree as the split type. As demonstrated in the results table below, TAR consistently achieves top performance on both heterophilic graphs, further validating its robustness across different homophily regimes. |dataset|WebKB|WebKB|Chameleon|Chameleon| |-|-|-|-|-| |shift|concept|covariate|concept|covariate| |ERM|27.16±0.93|19.37±6.01|34.65±2.56|33.13±4.53| |VREx|26.61±1.42|36.83±1.99|34.81±2.51|**36.88±3.43**| |CIT|28.99±2.11|28.89±9.09|36.76±3.43|35.11±2.35| |GroupDRO|29.17±1.76|25.24±9.57|34.97±0.95|36.15±5.19| |TAR|**30.83±1.90**|**37.46±4.84**|**38.34±1.48**|**36.88±2.66**| # Performance on contrastive loss Thank you for your insightful question! TAR can be applied to different loss types. We apply TAR to two graph contrastive learning methods, GRACE [2] and COSTA [3], to verify its effectiveness on contrastive loss. Specifically, we apply TAR in the pre-training stage and keep the fine-tuning stage the same as the raw contrastive learning method. As shown in the table below, TAR consistently improves the performance of different graph contrastive learning methods under distribution shifts. 
|dataset|WebKB|WebKB|CBAS|CBAS| |-|-|-|-|-| |shift|concept|covariate|concept|covariate| |ERM|27.16±0.93|19.37±6.01|82.43±2.56|78.29±3.10| |GRACE|32.66±5.79|19.68±9.73|90.29±1.64|93.71±1.28| |+ TAR|36.33±5.79|26.51±3.00|91.00±0.64|94.57±2.56| |COSTA|32.11±4.00|16.03±5.09|82.71±1.63|76.29±3.99| |+ TAR|33.39±3.28|19.68±5.68|85.29±2.45|79.14±3.73| [1] S. Gui, et al. GOOD: A graph out-of-distribution benchmark. Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track. [2] Y. Zhu, et al. Deep Graph Contrastive Representation Learning. ICML Workshop on Graph Representation Learning and Beyond, 2020. [3] Y. Zhang, et al. COSTA: Covariance-preserving feature augmentation for graph contrastive learning. SIGKDD 2022. # Training Cost Analysis We compared the time cost (epoch/s) of our method with other methods, as shown in the table below. It can be observed that our method, similar to GroupDRO and KLDRO, introduces only minor overhead, demonstrating **excellent scalability**. For a comprehensive complexity analysis, please see Appendix F of our paper. |Methods|ERM|KLDRO|GroupDRO|CIT|TAR(ours)| |-|-|-|-|-|-| |CBAS (4k edges)|10.1±0.2|10.2±0.7|10.3±1.2|15.0±4.3|12.0±2.2| |Arxiv (1.2M edges)|181.8±1.0|186.9±1.1|232.9±1.4|OOM|235.1±3.1| Should you have any additional concerns, please do not hesitate to let us know. --- Rebuttal Comment 1.1: Comment: I confirm that I have read the author's response to the question I raised, and I still keep my score.
Summary: This paper proposes a Topology-Aware Dynamic Reweighting (TAR) framework to address distribution shifts in node classification tasks using Graph Neural Networks (GNNs). Addressing the limitations of existing invariant learning methods (which rely on strong invariance assumptions) and sample reweighting approaches (which ignore graph topology), TAR dynamically adjusts node weights through gradient flows along graph edges while incorporating topological constraints via discrete geometric Wasserstein distance. The framework employs a minimax optimization strategy: the inner maximization identifies local worst-case distributions through constrained gradient flows, while the outer minimization trains the GNN model on these adversarially weighted samples to enhance distributional robustness. Theoretical analysis proves that TAR achieves exponential convergence rates and provides formal distributional robustness guarantees. Claims And Evidence: The claims in the submission are largely supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes Theoretical Claims: The core theoretical claims are mathematically consistent within the stated assumptions. Experimental Designs Or Analyses: Yes Supplementary Material: Yes, I reviewed the Implementation Details and the Comparison on SOTA Graph Transformer sections. Relation To Broader Scientific Literature: It offers a new perspective for graph node classification under distribution shifts. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. Combines discrete geometric Wasserstein constraints with entropy regularization, ensuring smoothness. 2. Provides formal guarantees for distributional robustness and exponential convergence rates. 3. Demonstrates consistent improvements over most baselines across diverse datasets. 4. Maintains computational efficiency via message-passing mechanisms, with minimal overhead compared to ERM training. Weaknesses: 1. 
Limited Analysis of Graph Extrapolation (GE): (1) Compare GE to other augmentation strategies (2) Analyze how GE interacts with TAR’s reweighting mechanism. 2. While TAR is efficient for moderate-sized graphs, the ​gradient flow iterations may scale poorly for graphs with billions of edges. 3. The bi-level optimization of min and max may lead to high training costs and instability. Other Comments Or Suggestions: 1. Please add the analysis of graph extrapolation. 2. It is suggested that the authors supplement the performance of the method in this paper when there is no covariance shift or concept shift during testing. 3. It is suggested that the authors provide sensitivity analyses for the following parameters: the coefficient of the entropy term, the TAR inner learning rate, the graph extrapolation ratio, and the coefficient of the topology penalty. Questions For Authors: 1. How is the sensitivity of other parameters? 2. Why does this paper additionally consider concept shift? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your careful review and insightful feedback. Below we provide point-by-point responses to address all raised concerns. # Analysis of Graph Extrapolation ## 1. Interaction between GE and Reweighting Graph Extrapolation (GE) is a crucial component of TAR. Without GE, TAR reweighting can only simulate potential worst-case distributions by adjusting sample weights of existing data, which is limited by the coverage of the current graph. In contrast, GE expands the potential distribution by transforming the graph. Specifically, while OOD samples may not appear during training, GE may construct such samples, allowing TAR reweighting to assign them higher weights and simulate potential distribution shifts. ## 2. Comparison with Mixup We use both Drop Edge and Mask Feature as GE since they transform the graph's topology and node features, respectively. These techniques are also utilized in other graph OOD methods like FLOOD. For comparison, we replace GE with Mixup, which only interpolates node features. The results show that GE consistently outperforms Mixup. |dataset|CBAS|Twitch|WebKB| |-|-|-|-| |Mixup|74.86±1.63|52.23±1.21|31.59±10.53| |TAR+mixup|78.00±2.78|54.51±1.30|36.51±8.57| |TAR+GE (ours)|**87.43±5.09**|**57.82±2.13**|**37.46±4.84**| # Performance when there is no shift during testing Thank you for your suggestion! Below are the comparative results and key observations: - On the CBAS and Twitch datasets, TAR consistently outperforms baselines in both IID (no shift) and OOD (covariate shift) settings, with significantly larger gains under covariate shift. - On the Cora dataset, TAR shows a slight, statistically insignificant drop in IID performance but achieves the best results under OOD, surpassing other methods. 
|dataset|CBAS|CBAS|Twitch|Twitch|Cora|Cora| |-|-|-|-|-|-|-| |shift|IID|OOD|IID|OOD|IID|OOD| |ERM|86.00±1.20|78.29±3.10|72.83±1.23|50.04±2.53|69.65±0.55|64.72±0.54| |VREx|83.14±3.26|78.57±2.02|70.84±4.15|51.26±4.82|69.63±0.31|65.02±0.45| |GroupDRO|86.29±1.28|77.14±2.67|71.54±3.34|50.48±2.04|**69.77±0.26**|64.95±0.59| |TAR|**90.29±2.93**|**87.43±5.09**|**73.73±0.44**|**57.82±2.13**|69.53±0.77|**65.64±0.37**| # Sensitivity analysis of other parameters Thank you for your insightful suggestion! We would like to clarify that "the coefficient of the topology penalty" is not a hyperparameter; we set it to $\frac{1}{2\tau}$ in our optimization (page 3, line 163), and $\tau$ is exactly the TAR inner learning rate (page 4, equation 5). Due to space constraints, additional analysis figures for other parameters are provided in this anonymous [link](https://anonymous.4open.science/r/ICML-rebuttal-16C3). Please refer to it for more details. # Training Cost Analysis Compared to empirical risk minimization (ERM), TAR introduces minimal overhead. Our paper (Appendix F) includes complexity analysis, and we provide additional details here. For an $l$-layer, $d$-dimensional GCN on a graph with $n$ nodes and $m$ edges, the time cost of TAR includes: - **Outer Training**: $O(ld^2n + lm)$, which encompasses feature transformation and neighbor aggregation. - **Inner Loop**: The $k$ iterations have a time complexity of $O(kn + km)$. Real-world graphs are typically sparse, with small node degrees and larger GNN dimensions (e.g., $d=300$, max average degree $m/n=26$ in Twitch). Thus, $ld^2n \gg lm$, and since $k \ll ld^2$, TAR's additional computation is negligible. Besides, the inner loop operates in a message-passing style and is parameter-free, ensuring scalability for large-scale graphs. The table below compares TAR's runtime (epoch/s) with other methods, showing it introduces slight overhead over ERM, demonstrating **excellent scalability**. 
|Methods|ERM|KLDRO|GroupDRO|VREx|CIT|TAR(ours)| |-|-|-|-|-|-|-| |CBAS (4k edges)|10.1±0.2|10.2±0.7|10.3±1.2|10.8±0.2|15.0±4.3|12.0±2.2| |Arxiv (1.2M edges)|181.8±1.0|186.9±1.1|232.9±1.4|231.8±0.8|OOM|235.1±3.1| > Q: Why does this paper additionally consider concept shift? We would like to clarify that we follow the terminology used in GOOD benchmark, where distribution shifts are categorized into concept shift and covariate shift, with datasets specifically constructed for these cases. Our experiments strictly follow this benchmark setting, and results show that our method effectively enhances performance across both types of shifts. Thank you for your question, and we will incorporate this into our final version to avoid any misunderstandings. > Q: The bi-level optimization of min and max may lead to high training costs and instability. For TAR's training cost, we refer the reviewer to the "Training Cost Analysis" above. Regarding instability concerns, we provide additional loss visualizations at this [link](https://anonymous.4open.science/r/ICML-rebuttal-16C3). These results demonstrate that TAR achieves a smaller generalization loss without significant training instability. Should you have any additional concerns, please do not hesitate to let us know.
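The minimax reweighting idea discussed in this thread (an inner loop that up-weights high-loss nodes while smoothing weights along graph edges, followed by outer training on the weighted loss) can be caricatured in a few lines. This is a schematic sketch only, not the authors' implementation: the multiplicative update and the neighbor-averaging smoother are stand-ins for the entropy-regularized geometric gradient flow, and all names are placeholders.

```python
import numpy as np

def inner_reweight(losses, adj, tau=0.5, steps=5, mix=0.0):
    """Toy worst-case reweighting over the nodes of a graph.

    losses: per-node training losses; adj: dense adjacency matrix;
    tau: inner step size; mix in [0, 1]: how strongly weights are
    pulled toward the neighbor average each iteration.
    """
    n = len(losses)
    w = np.full(n, 1.0 / n)
    deg = np.maximum(adj.sum(axis=1), 1.0)
    for _ in range(steps):
        w = w * np.exp(tau * losses)                  # up-weight high-loss nodes
        w = (1.0 - mix) * w + mix * (adj @ w) / deg   # smooth along edges
        w = w / w.sum()                               # keep a probability distribution
    return w

losses = np.array([0.0, 1.0, 2.0])
w = inner_reweight(losses, np.ones((3, 3)), tau=0.5, steps=5, mix=0.0)
```

With mix=0.0 this reduces to a KL-DRO-style exponential tilt; the outer step would then simply minimize the weighted loss `(w * losses).sum()` with `w` held fixed.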
Summary: This paper proposes TAR, a framework which dynamically weights and reweights nodes within a Graph Neural Network (GNN) given the "risk" level of nodes, incorporating topological structural information and providing robustness against distribution shifts. Claims And Evidence: Claims are backed by theoretical proofs when made. Methods And Evaluation Criteria: Five classification datasets commonly used across OOD problems were used for validation, with a range of graph size and class/feature space size. Distribution shifts were defined. Baseline and state-of-the-art methods were compared against the newly proposed TAR framework, with TAR maximising the central tendency of test accuracy without compromising the variance. Theoretical Claims: Proofs were concise and used where relevant. Experimental Designs Or Analyses: The experimental design is well suited to the problem posed. Supplementary Material: The supplementary material describes in more detail nomenclature on distribution shifts. It discusses the datasets and hyper-parameters of the implementation. A time complexity analysis under big-O notation is detailed. Relation To Broader Scientific Literature: This paper builds on work in the field of OOD generalisation, in particular providing theoretical proofs alongside empirical validation for distributional robustness. Essential References Not Discussed: I am not aware of any essential missing references. Other Strengths And Weaknesses: This paper is concise where necessary and explains the TAR framework well without unnecessary focus on preliminaries. In particular the backing of empirical work with theoretical proofs for understanding time complexity and limitations of the approach is commended. Other Comments Or Suggestions: Graph extrapolation could have been mentioned earlier in the paper. It is used in Figure 1, but the requirements/comments are not expanded upon until section 3.3. Page 5 Line 263 – unclear grammar used. 
“…which disrupts the connectivity and hinders the calculation of this Equation, we intuitively…”. Page 13 Line 665 – Spelling error. “entorpy term”. Questions For Authors: No questions, thank you for the paper Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for the insights in the evaluation and the hints for revising manuscripts. > Graph extrapolation could have been mentioned earlier in the paper. Thank you for your suggestion. We will add a summary in the methodology section to introduce GE earlier. Once again, thank you for your careful review. We will thoroughly check our manuscript to avoid any grammar or spelling errors!
Preference Controllable Reinforcement Learning with Advanced Multi-Objective Optimization
Accept (poster)
Summary: The paper introduces a novel framework, Preference Controllable Reinforcement Learning (PCRL), which trains a single, preference-conditioned policy capable of generating Pareto optimal solutions according to user-specified trade-offs. The approach leverages advanced multi-objective optimization (MOO) techniques and proposes a new update method, PreCo, that incorporates a similarity function to align the policy’s performance with a given preference. The paper supports its claims with comprehensive theoretical analysis—providing convergence guarantees—and demonstrates the method’s effectiveness through experiments in environments such as Fruit-Tree and MO-Reacher, showing improved hypervolume and cosine similarity metrics compared to traditional linear scalarization (LS) and several existing MOO methods. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: The theoretical analysis, while rigorous, makes some idealized assumptions that may not fully capture the noise and uncertainties present in real-world scenarios, thereby leaving questions about the practical applicability of the method. Experimental Designs Or Analyses: There is a lack of ablation studies or sensitivity analysis regarding key hyperparameters—such as the similarity weight $\lambda$ in the PreCo update—which raises concerns about the robustness and practical tuning of the approach. Supplementary Material: Yes. Relation To Broader Scientific Literature: The proposed method represents an important contribution to the field of multi-objective reinforcement learning. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The PCRL framework addresses the well-known limitation of LS methods by enabling more extensive exploration of the Pareto front and precise control over policy outputs in non-strictly convex settings. 2. 
The proposed PreCo update method is innovative; by integrating a similarity gradient to balance conflicting multi-objective gradients, it provides a clear theoretical advancement, as evidenced by detailed convergence proofs (e.g., Theorems 4.1–4.4). 3. The experimental evaluation is extensive, covering both discrete action environments (Fruit-Tree) and robotic control tasks (MO-Reacher), with state coverage visualizations that effectively illustrate how the method produces diverse, preference-specific policies. 4. The authors also consider computational efficiency by solving the min-norm problem at the policy level rather than at the parameter level, which is beneficial for scalability in large models. Weaknesses: 1. The set of baseline methods used in the experiments is somewhat limited; some of the compared methods are relatively dated, which narrows the scope of the comparative analysis. 2. Experiments are primarily conducted in discrete action spaces. Although the authors mention potential applications to continuous control and real robots, there is insufficient experimental evidence to support the method’s effectiveness in these more challenging settings. Other Comments Or Suggestions: No Questions For Authors: 1. Can the authors provide ablation studies on the similarity weight $\lambda$ to illustrate its impact on both convergence speed and final control performance? 2. For continuous action spaces, does the PreCo update method require any modifications? Could the authors elaborate on potential challenges and necessary adjustments for continuous control environments? 3. Although the paper suggests that the method could be applied to real robotic tasks, are there any plans for experimental validation on actual robotic platforms? What practical issues might arise during such a transition? 4. The current experiments involve up to six objectives. 
How does the proposed method scale when the number of objectives increases further, and are there anticipated bottlenecks in terms of computational complexity or performance? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer, We sincerely thank you for your valuable review and constructive feedback. We address your concerns and answer your questions below: --- > Q1. Can the authors provide ablation studies on the similarity weight $\lambda$ to illustrate its impact on both convergence speed and final control performance? A1: The table below demonstrates the robustness of our method to different values of $\lambda$: |$\lambda$ |5|10|50|100|300|550| |-|-|-|-|-|-|-| |Hypervolume (*1e3)|13.83$\pm$1.74|13.98$\pm$2.11|14.74$\pm$1.36|15.02$\pm$1.70|15.21$\pm$0.79|15.61$\pm$0.75| |Cosine Similarity|0.76$\pm$0.03|0.76$\pm$0.04|0.77$\pm$0.02|0.77$\pm$0.03|0.79$\pm$0.02|0.78$\pm$0.03| This table illustrates how different upper bounds of $\lambda$ affect PreCo's performance in the FruitTree environment. While smaller values of $\lambda$ may slightly change the performance, PreCo remains consistently superior to existing MORL methods. Notably, the best-performing baseline, PDMORL, achieves a hypervolume of only **9.3$\pm$0.08*1e3**, demonstrating PreCo's promising advantage. --- > Q2. For continuous action spaces, does the PreCo update method require any modifications? Could the authors elaborate on potential challenges and necessary adjustments for continuous control environments? A2. This has been discussed in Appendix E, with experiments in continuous action spaces presented in Appendix C. In continuous action spaces, the "policy-level" gradient corresponds to the gradients of (1) action samples (Eq. 26) or (2) the parameters of the Gaussian action distribution (Eq. 24), rather than the gradients of action probabilities. As illustrated in Figure 9 and Algorithm 2, our approach first determines a common descent direction for all objectives by solving the min-norm problem (6) using their gradients (Eq. 24 or 26). The policy model is then updated to fit the new actions along this direction.
The experiments in Appendix C focus on high-dimensional continuous control tasks with simple, strictly convex objectives. While the performance advantage is smaller compared to tasks with non-strictly convex objectives and many objectives, our method still outperforms previous MORL approaches. --- > Q3. Although the paper suggests that the method could be applied to real robotic tasks, are there any plans for experimental validation on actual robotic platforms? What practical issues might arise during such a transition? A3: Real-world applications, such as robotics, are indeed an exciting direction. In fact, we have already been applying our method to large language models and achieved promising results. One of our future plans is to leverage preference-controllable language models for high-level robotic planning and human-robot interaction. For low-level robotic control, several challenges include common sim-to-real issues such as sample efficiency, dynamics mismatch, and safety concerns. One critical study is to ensure that the robot does not exhibit unpredictable behavior when conditioned on previously unseen preferences. --- > Q4. The current experiments involve up to six objectives. How does the proposed method scale when the number of objectives increases further, and are there anticipated bottlenecks in terms of computational complexity or performance? A4: The MOO algorithms with conflict-avoidance ability, including our proposed PreCo, require solving the min-norm problem (6) to determine a weight vector $w$, whose dimension corresponds to the number of objectives. When the number of objectives becomes very large, the computational cost of solving (6) can increase significantly. To mitigate this, instead of explicitly solving (6) at every training step (Algorithm 2), we can update $w$ using the gradient of (6) once per training step (Algorithm 1).
Our theoretical results justify the convergence of this novel approach, making it a more scalable alternative in practice. --- We hope these clarifications address your concerns. Let us know if you have any further questions. --- Rebuttal Comment 1.1: Comment: Thanks for the response that addresses my concerns --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your thoughtful feedback and are glad that our response addressed your concerns. If our clarifications have resolved your doubts, we would be grateful if you could reconsider your rating. Your comments helped us to strengthen our empirical results, and your support for our work could further drive advancements in this research direction. Thank you for your time and support!
Summary: The paper proposes a novel approach to learning the ϵ-Pareto efficient frontier in multi-objective optimization using standard reinforcement learning algorithms, specifically TD3 and PPO. The authors introduce a method where preferences are sampled uniformly, and a similarity function between the preference and value function is used as feedback to guide the RL algorithm. Theoretical guarantees are provided regarding convergence to ϵ-optimality. Empirical results are presented across multiple environments to demonstrate the efficacy of the proposed approach. The paper also discusses the potential application of the method in real-world scenarios, particularly in multi-agent reinforcement learning settings. Claims And Evidence: The claims made in the paper are generally supported by empirical evidence and theoretical analysis. The authors provide convergence guarantees for their method, which are backed by theoretical proofs. The empirical results are extensive and demonstrate the effectiveness of the proposed approach across various environments. However, some claims could benefit from further clarification or additional evidence: The claim that the method achieves ϵ-Pareto efficiency is supported by theoretical analysis, but the empirical results (e.g., Figure 4b and 4d) do not clearly show that the solutions lie on the Pareto front. This discrepancy should be addressed. The justification for the choice of similarity function in Eq. (8) is not thoroughly explained. A more detailed discussion or comparison with alternative similarity functions would strengthen this claim. Methods And Evaluation Criteria: The proposed methods are appropriate for the problem of learning the ϵ-Pareto efficient frontier in multi-objective optimization. The use of TD3 and PPO as base algorithms is reasonable, given their widespread success in RL. 
The evaluation criteria, such as hypervolume optimization, are standard in multi-objective optimization literature and align well with the goals of the paper. However, the following points could be improved: The paper could benefit from a more detailed discussion of why hypervolume was chosen as the primary metric and how it relates to the ϵ-Pareto efficiency goal. The uniform sampling of preferences is a simplifying assumption. The authors should discuss how non-uniform preference sampling might affect the results and whether the method can be adapted to handle such cases. Theoretical Claims: The theoretical claims regarding convergence to ϵ-optimality are a key contribution of the paper. The proofs appear to be correct, but the theoretical analysis could be deepened. For instance: The learning rates for the RL algorithms are mentioned, but a more detailed discussion of their impact on convergence would be valuable. The paper could explore the theoretical implications of non-transient preferences or non-linear preference transformations, as these scenarios are common in real-world applications. Experimental Designs Or Analyses: The experimental design is sound, with results presented across multiple environments to demonstrate the robustness of the proposed method. However, there are a few areas for improvement: In Figure 4b and 4d, the solutions do not clearly lie on the Pareto front, which raises questions about the empirical validity of the ϵ-Pareto efficiency claim. The authors should address this discrepancy. The experiments focus on synthetic environments. Including real-world benchmarks, such as economic settings (e.g., profit vs. customer satisfaction), would strengthen the practical relevance of the paper. Supplementary Material: The supplementary material was reviewed and provides additional details on the theoretical proofs and experimental setups. 
However, it could be expanded to include additional empirical results, particularly for non-uniform preference sampling or non-transient preferences. Relation To Broader Scientific Literature: The paper builds on prior work in multi-objective optimization and reinforcement learning. The key contribution—learning the ϵ-Pareto efficient frontier using RL—is novel and addresses an important gap in the literature. However, the paper could better situate itself within the broader context by discussing how the proposed method compares to other approaches for Pareto front learning, such as evolutionary algorithms or gradient-based methods. Essential References Not Discussed: The paper could benefit from citing and discussing the following works: Recent advances in non-linear preference transformations for multi-objective optimization. Methods for handling non-uniform preference sampling in RL. Applications of Pareto optimization in economic settings, which would provide a more nuanced benchmark for the proposed method. Other Strengths And Weaknesses: Strengths: The paper is well-written and addresses an important problem in multi-agent reinforcement learning. The empirical results are extensive and demonstrate the potential of the proposed method in applied settings. The theoretical guarantees provide a solid foundation for the proposed approach. Weaknesses: The delta from previous RL algorithms is not clearly articulated. The reliance on TD3 and PPO without significant modification raises questions about the novelty of the method. The empirical results do not consistently demonstrate ϵ-Pareto efficiency, particularly in Figure 4b and 4d. The paper could benefit from a more thorough discussion of the broader implications and limitations of the proposed method. Other Comments Or Suggestions: Move the discussion of related works to the introduction for better flow. Clarify whether the objectives are truly "conflicting" or simply involve trade-offs (Line 138). 
Move Algorithm 2 to the main paper, as it represents the actual algorithm used, while Algorithm 1 is more illustrative. Questions For Authors: Justification for Similarity Function (Eq. 8): What justifies the use of this specific similarity function over alternatives? How sensitive are the results to the choice of similarity function? Non-Uniform Preference Sampling: How would non-uniform sampling of preferences affect the learned policy? Can the method be adapted to handle such cases? Non-Transient Preferences: Can the method extend to non-transient preferences? Is consistency in preferences a requirement for convergence? Economic Benchmarks: Have the authors considered benchmarking the method on economic settings, such as maximizing profit versus customer satisfaction? This would provide a more nuanced evaluation of the method's applicability. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer, We sincerely thank you for your valuable review and constructive feedback. We address your concerns and answer your questions below: --- >**C1: About Figure 4b and 4d:** The empirical results do not consistently ... in Figure 4b and 4d. **R1:** Figures 4b and 4d are **2D projections of the 3D results** presented in Figure 4a. While some points may appear dominated in the 2D projections, all points in the figures are actually non-dominated when all three objectives are taken into account. We hope this clarification helps address your concern regarding the empirical results. --- >**C2:** The delta from previous RL algorithms is not clearly ... the novelty of the method. **R2:** TD3 and PPO are representative single-objective RL algorithms used as lower-level backbones in our approach. More details on how they are integrated can be found in Appendix E. The key novelties of our work are as follows: 1. **Conceptual:** We propose a preference control framework (PCRL) that overcomes the limitations of traditional linear scalarization methods, which can only discover a limited set of Pareto-efficient solutions. 2. **Methodological:** We integrate modern MOO algorithms into MORL, enabling better handling of conflicting and stochastic gradients—challenges that previous MORL methods largely overlooked. Additionally, we design a novel PreCo algorithm to leverage these strengths while ensuring preference alignment. 3. **Technical:** Beyond theoretical analyses, we propose a memory-efficient approach for policy-level computation (see Figure 9, Algorithm 2, and Appendix E), making our method scalable for large models used in practical applications. --- >Q1: Justification for Similarity Function (Eq. 8): ... choice of similarity function? **A1:** PreCo's theoretical properties require the similarity function to: 1. Be **Lipschitz smooth**. 2.
Have a gradient $g_s$ that is a **positive linear combination** of the objective gradients $g_{1:m}$ ($m$ is the number of objectives). The similarity defined in Definition 4.1 satisfies these conditions. Using a different similarity function could disrupt the convergence guarantees of our algorithm. In practice, conventional cosine similarity could work in some cases, but it lacks theoretical guarantees. --- >Q2: Non-Uniform Preference Sampling: ... handle such cases? **A2:** Non-uniform preference sampling may cause certain preferences to be learned better than others or lead to overfitting. The impact depends on the model's generalizability, so training with a uniform distribution is preferable without prior knowledge of test preferences. However, as noted in Appendix A.3, a progressive curriculum for preference distributions could accelerate learning. For instance, the agent could start with a small, diverse set of preferences and gradually expand to neighboring ones, progressively uncovering the full Pareto front. Further exploration of this is an interesting direction for future work. --- >Q3: Non-Transient Preferences: ... convergence? **A3:** If we understand correctly, non-transient preferences refer to preferences that are consistently sampled during training. While this could lead to overfitting to specific preferences, a well-designed curriculum that gradually introduces new preference distributions can mitigate this issue and promote exploration of the full Pareto front (as discussed in Appendix A.2). In fact, the training scheme in Algorithm 2 (Appendix E.1) can be viewed as a form of meta-learning, where the outer loop samples different preferences and the inner loop optimizes for a specific sampled preference. While convergence is always guaranteed within the inner loop, non-stationary preference distributions may affect the overall meta-learning process, potentially leading to overfitting.
However, a carefully designed curriculum with progressively shifting preference distributions, even if non-stationary, can accelerate learning and improve the agent’s performance. Theoretical analysis of such a curriculum would be an interesting direction for future work. --- >Q4: Economic Benchmarks: ... the method's applicability. **A4:** Economic benchmarks are indeed an interesting direction for future work. We will add more discussion on economic applications in the related work section. A key technical contribution of our method is to solve the min-norm problem (6) at the policy level (details in Figure 9 and Appendix E), removing the need to store multiple parameter gradients for different objectives. This results in minimal **additional memory cost**, making it highly suitable for training large-scale practical models. In fact, we are already applying our method to multi-objective preference learning for large language models with promising results, suggesting its potential applicability to practical economic settings as well. --- Thanks again, and please let us know if there are further questions.
Summary: This paper proposes PCRL for preference control of multi-objective trade-offs, incorporating recent MOO algorithms into MORL. Convergence analysis is provided to show the approach can learn preference-specific Pareto optimal solutions and handles stochastic gradients as well. Claims And Evidence: Yes, the paper is showing evidence in both theoretical proofs and simulation results. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, the Theorems in the main paper make sense, though I have not checked the full proof. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, the simulation setup and part of the proof. Relation To Broader Scientific Literature: This paper proposes a new technique to scale up reinforcement learning algorithms to the multi-objective setting. Essential References Not Discussed: No. Other Strengths And Weaknesses: The paper proposes a novel approach based on multi-objective optimization. The method is clearly described, and the simulation results, especially those with higher numbers of objectives and conflicting objectives, validate the model design. However, the computational complexity is unclear, and the method seems to require more iterations and to keep a large volume of gradient information. It is unclear how the stepsize choice for each preference would affect the performance. Moreover, the method seems hard to transfer to a new preference vector, as the algorithm needs to re-run the whole optimization. In addition, more explanation of MGDA is required, as it is not explained in this paper. Other Comments Or Suggestions: Please see below for questions. Questions For Authors: Theorem 4.2's assumption is that g' is a convex combination of g_i. Is this practical for realistic MORL settings? What exactly does a Multi-Objective Optimization (MOO) algorithm refer to? Are there any choices other than the MOO methods mentioned in this work?
"However, the gradients of cosine similarity and the original objectives can conflict, as it does not leverage conflict-avoidance techniques from MOO algorithms." Are there any evidence showing this claim, rather than just looking at the final performance? Can the authors comment on the computation complexity of implementing Jacobian matrix and similarity gradient, especially compared to plain policy gradient algorithms? ### Update after rebuttal Thanks for the authors' efforts in rebuttal. My evaluation has been updated. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer, We sincerely thank you for your time and valuable review. --- # Response to your questions Below are our answers to your questions > Q1. About Theorem 4.2's assumption **A1:** Theorem 4.2 considers the case where $\lambda$ increases indefinitely. **In practice, Theorem 4.1 already guarantees PreCo’s convergence as long as $\lambda$ has an upper bound**, as noted in Remark 4.1. In Appendix F.2, we provide some hints for constructing such a similarity function $\Psi'(p,\cdot)$ as follows: * When $v$ gets close to preference $p$: Its similarity gradient $\nabla_v\Psi'(p,v)$ should approach the convex preference weight $p$; * When $v$ is NOT close to preference $p$: $\nabla_v\Psi'(p,v)$ should be close to the convex coefficient $\nabla_v\Psi(p,v)/\|\nabla_v\Psi(p,v)\|_1$, where $\Psi$ is our original similarity defined in Definition 4.1. > Q2. About MOO and choices other than the MOO **A2:** MOO refers to the algorithms specifically designed for deep learning with multiple objectives, such as MGDA[1], CAGrad[2], PCGrad[3], SDMGrad[4]. They are known to find a conflict-avoidance direction for common improvements across all objectives. Additionally, evolutionary algorithms[5] have been developed for multi-objective problems, but they tend to be less scalable and less sample-efficient compared to gradient-based MOO approaches. > Q3. Evidence for gradient conflict **A3:** Consider a two-objective case where the values are $[v_1,v_2]=[10.0,1.5]$ and the preference is $p=[0.9,0.1]$, and let $g_1,g_2$ denote the gradients of objectives 1 and 2, respectively. The gradient of cosine similarity is $0.1483 g_1 -0.9889 g_2$, which is nearly opposite to $g_2$, creating a direct conflict with objective 2’s gradient. To resolve this, we need a conflict-avoidance mechanism that optimizes similarity without negatively impacting objective 2.
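These coefficients can be reproduced with a short standalone script (pure Python; $g_1,g_2$ themselves stay abstract, since by the chain rule the conflict is fully captured by the normalized partial derivatives of the cosine similarity with respect to $v$):

```python
import math

# Example values and preference from A3 above
p = [0.9, 0.1]
v = [10.0, 1.5]

# Partial derivatives of cos(p, v) = p.v / (|p| |v|) with respect to v.
# By the chain rule, these are the coefficients of g_1 and g_2 in the
# parameter-space gradient of the cosine similarity.
dot = sum(pi * vi for pi, vi in zip(p, v))
norm_p, norm_v = math.hypot(*p), math.hypot(*v)
coef = [pi / (norm_p * norm_v) - dot * vi / (norm_p * norm_v ** 3)
        for pi, vi in zip(p, v)]

# Normalize to unit length for readability: the coefficient of g_2 is
# strongly negative, i.e., the update directly conflicts with objective 2.
scale = math.hypot(*coef)
unit = [c / scale for c in coef]
```

Running this yields coefficients close to $(0.1483, -0.9889)$, matching the example above.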
Intuitively, the goal is to find a direction that improves both $v_1,v_2$ while prioritizing $v_1$ to get close to the preference $p$. This can be precisely achieved by our proposed PreCo algorithm. >Q4. Computation complexity of implementing Jacobian matrix and similarity gradient **A4:** Solving the min-norm problem (6) introduces additional computation to determine a common ascent direction, improving sample efficiency at the cost of extra computation. However, **memory consumption remains comparable** to plain policy gradient algorithms: * When solving the problem (6), objective gradients $\nabla_\pi \mathbf{v}^{\pi}$ and similarity gradient $\nabla_\pi \Psi(\mathbf{p},\mathbf{v}^{\pi})$ are stored, but their size is at most batch size $\times$ number of objectives, which is typically **much smaller than the parameter size**. Once (6) is solved, we obtain a batch-size update direction $d^*$ for $\pi$, allowing $\nabla_\pi \mathbf{v}^{\pi},\nabla_\pi \Psi(\mathbf{p},\mathbf{v}^{\pi})$ to be released. * After solving (6), we compute $\nabla_\theta^T\pi\ d^*$ to update the model parameter. **The Jacobian $\nabla_\theta^T\pi$ is not explicitly computed and stored**. Instead, we take the gradient of ${d^*}^T \pi$ to obtain it. In a formulation similar to the policy gradient, it corresponds to $E_{s,a}[\nabla_\theta \log\pi(a|s)\ d^*(s,a)\pi_{old}(a|s)]$, as detailed in Appendix E. Therefore, memory usage at any given time does not exceed that of linear scalarization methods that compute the gradient $\nabla_\theta^T\pi \nabla_\pi \mathbf{p}^T\mathbf{v}^{\pi}$. In the policy gradient form, it is expressed as $E_{s,a}[\nabla_\theta \log\pi(a|s)\ \mathbf{p}^T\mathbf{q}^{\pi}(s,a)]$. --- # Addressing concerns of weaknesses > However, it is unclear about the computation complexity... large volume of gradient information. As mentioned in A4, keeping a large volume of gradient information is unnecessary, and the memory consumption remains comparable to traditional methods.
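Regarding the request for more explanation of MGDA: for two objectives, the min-norm step has a well-known closed form in the MOO literature. The sketch below shows only that generic two-gradient form, as a simplification of our problem (6), which additionally involves the similarity gradient:

```python
def min_norm_direction(g1, g2):
    """Min-norm element of the convex hull of two gradients:
    argmin over gamma in [0, 1] of ||gamma * g1 + (1 - gamma) * g2||^2.
    The result has a non-negative inner product with both g1 and g2,
    i.e., it is a conflict-avoidance (common improvement) direction."""
    diff = [a - b for a, b in zip(g1, g2)]
    denom = sum(d * d for d in diff)
    if denom == 0.0:          # g1 == g2: no conflict to resolve
        return list(g1)
    gamma = sum((b - a) * b for a, b in zip(g1, g2)) / denom
    gamma = min(1.0, max(0.0, gamma))
    return [gamma * a + (1.0 - gamma) * b for a, b in zip(g1, g2)]

# Orthogonal (conflicting) gradients are balanced evenly:
d = min_norm_direction([1.0, 0.0], [0.0, 1.0])  # -> [0.5, 0.5]
```

With more than two objectives (or with the similarity gradient included, as in PreCo), the same quadratic program is solved over a higher-dimensional simplex rather than in closed form.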
> It is unclear ... to transfer to new preference vector, as the algorithm needs to re-implement the whole optimization again. As mentioned in lines 143–144 of Section 3 and Algorithm 2 in Appendix E, preferences for training are uniformly sampled, and the stepsize for each preference sample remains constant. This enables a single agent to learn in a "meta-learning" fashion and to generalize to unseen preferences, as demonstrated in our experiments (Section 5 and Appendix D), where we test the agent on preferences it has not encountered during training. --- We hope these clarifications address your concerns. Please let us know if you have any further questions. # References [1] Désidéri, MGDA, 2009 [2] Liu et al. Conflict-averse gradient descent for multi-task learning, 2021 [3] Yu et al. Gradient Surgery for Multi-Task Learning, 2020 [4] Xiao et al. Direction-oriented multi-objective learning: Simple and provable stochastic algorithms, 2023 [5] Tan et al. Evolutionary Algorithms for Solving Multi-Objective Problems, 2007 --- Rebuttal Comment 1.1: Comment: The reviewer appreciates the authors' further explanation.
Summary: This paper addresses the limited controllability and coverage of Pareto-optimal solutions in Multi-Objective Reinforcement Learning (MORL), where existing methods based on linear scalarization might struggle to align with user-defined trade-offs and fail to explore the full Pareto front. To overcome these limitations, the authors propose Preference Controllable Reinforcement Learning (PCRL), a framework that trains a preference-conditioned meta-policy to generate solutions based on user-specified preferences. They further introduce PREference COntrol (PreCo), a novel algorithm tailored for PCRL, providing theoretical guarantees on convergence and preference-controlled Pareto-stationary solutions. By integrating advanced Multi-Objective Optimization (MOO) methods, PCRL enhances controllability in MORL and proves particularly effective in environments with highly conflicting objectives, improving the diversity and utility of Pareto-optimal solutions. Claims And Evidence: Regarding the claims made in the paper, below are the positive and the negative parts: **Strengths** - This paper addresses the inherent controllability issue in many existing MORL methods that are meant to only find a Pareto-stationary point for some arbitrary / uncontrollable preference. By using PreCo, one can choose to find the corresponding Pareto-stationary point for a specific preference of interest. - The proposed approach can be viewed as a generalized version of linear scalarization (i.e., linear scalarization is a special case with $\Psi(p,v)=p^\top v$). This is an interesting and reasonable extension. - Motivated by the MOO literature, this paper extends the min-norm problem in the classic MGDA to the case of similarity function for preference control. **Weaknesses** - One main concern is that the proposed algorithm does not really solve the target PCRL problem considered by this paper. 
Specifically, on page 2, right column, it is mentioned that the goal is to learn a preference-conditional policy to achieve a Pareto-optimal value that maximizes the similarity for any preference $p$. However, based on Theorems 4.1-4.4, it appears that PreCo can only achieve Pareto stationarity (Theorems 4.1-4.2) and near-stationary points of the similarity function (Theorems 4.3-4.4). Hence, the theoretical results do not fully match the claims in the introduction. - Another concern is that I do not see why the conditional policy is needed in PreCo. In Algorithm 1, the (projected) gradient updates (Lines 4-5) are done only for one given preference $p$ (Line 1), and there seems nothing to do with varying preferences. My guess is that there should be some meta-level method that determines how to switch between different preferences during training, like many other single-policy-network MORL methods. Otherwise, one would need a full run of PreCo for **each** individual preference, and this can be extremely sample-inefficient (this issue is also relevant to the issues of the experiments described below). However, this part does not appear to be described in the paper. Methods And Evaluation Criteria: - Regarding the methods, one major concern is that technically this paper is more like an MOO paper instead of an MORL work. Based on the formulation and the PreCo algorithm, it appears that the vector-valued objective function $\mathbf{v}$ can be replaced by any objective function and does not necessarily need to be the total expected reward. Indeed, with the unbiasedness assumption (Assumption 4.2) and regularity conditions (Assumptions 4.1 and 4.3), the analysis in this paper completely gets rid of the inherent properties of MORL and thereby can simply follow the standard analysis of the MOO literature. With that said, in my view, this makes the analysis in this paper not that interesting.
Moreover, it can even degrade the performance as the inherent properties of MORL are completely ignored. - The choice of the similarity function (Definition 4.1) needs to be further justified. I can understand that $\Psi(p,v)$ is maximized when $p$ and $v$ are parallel. However, it is not clear how the similarity function would contribute to the overall policy update when $p$ and $v$ are far from being parallel, especially if the number of objectives is large. Theoretical Claims: I have checked the proof in Appendix I. I can see that the proofs can go through based on the existing MOO literature, but there are some issues to be fixed. Specifically, all the expectations in (51)-(54) need to be conditional. Otherwise, $\pi_{p,t}$ is a random variable and then (51)-(54) would not hold. Similarly, the authors need to check the correctness of Eqs. (56)-(61) and (68)-(70). Experimental Designs Or Analyses: - Evaluation domains: The domains used in this paper appear quite standard in the MORL literature. Both discrete and continuous control tasks in the MO-Gymnasium are considered in the evaluation. - Metrics: Hypervolume (HV) is a fairly standard MORL metric. As this paper focuses on the similarity, the authors also take the cosine similarity (CS) into account. - Performance: PreCo appears to achieve the best HV and CS in both Fruit Tree and the MO MuJoCo tasks. That being said, I do have several concerns: - About the sample efficiency: As mentioned in the “Claims And Evidence,” it remains unclear to me how PreCo handles different preferences during training (either each preference is treated separately or there is indeed some knowledge sharing across preferences in the implementation). Therefore, it is not clear why PreCo can have much better performance than the MORL benchmark methods (like PDMORL and CAPQL).
- About the number of environment steps used by each algorithm: Based on the above, one possibility is that different algorithms actually use different numbers of environment steps. If this is the case, then the comparison is actually not fair. With that said, I checked the experimental details in the appendix, but I did not find anything specifically mentioned about the environment steps. Please correct me if I missed anything. - About the missing baselines: There are several missing recent baselines that are known to be strong in finding the Pareto front in the context of MORL, such as Envelope Q-learning (and Envelope DQN) [1], Conditioned Network [2], Q-Pensieve [3], and PCN [4]. [1] Yang et al., “A generalized algorithm for multi-objective reinforcement learning and policy adaptation,” NeurIPS 2019. [2] Abels et al., “Dynamic weights in multi-objective deep reinforcement learning,” ICML 2019. [3] Hung et al., “Q-Pensieve: Boosting sample efficiency of multi-objective RL through memory sharing of Q-snapshots,” ICLR 2023. [4] Reymond et al., “Pareto Conditioned Networks,” AAMAS 2022. Supplementary Material: I have checked the experimental details and the proofs in the appendix. Relation To Broader Scientific Literature: The paper contributes to MORL by addressing the challenge of preference-controllable policy learning, which is not well studied in the existing MORL literature. Prior works, such as LS-based MORL methods, primarily optimize a scalarized objective but fail to capture a diverse set of Pareto-optimal solutions, limiting their ability to align with user preferences. PreCo incorporates preference-awareness into the learning process by conditioning policy updates on user-specified trade-offs. The lack of controllability in existing methods limits their practical applicability, making the research direction of PCRL necessary.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths** - The main strength is that the paper points out the inherent controllability issue in many existing MORL methods that are meant to only find a Pareto-stationary point for some arbitrary / uncontrollable preference. By using the proposed PreCo, one can choose to find the corresponding Pareto-stationary point for a specific preference of interest. Moreover, PreCo can be viewed as a generalized version of linear scalarization (i.e., linear scalarization is a special case with $\Psi(p,v)=p^\top v$), and this is a nice extension of the LS-based methods. **Weaknesses** - As mentioned above, the proposed algorithm does not seem to fully solve the PCRL problem. Notably, the goal is to learn a preference-conditional policy to achieve a Pareto-optimal value that maximizes the similarity for any preference $p$. However, based on Theorems 4.1-4.4, it appears that PreCo can only achieve Pareto stationarity (Theorems 4.1-4.2) and near-stationary points of the similarity function (Theorems 4.3-4.4). Hence, the theoretical results do not fully match the claims in the introduction. - The clarity of the proposed method can be improved. Specifically, the use of the conditional policy in PreCo, how the different preferences are handled, and the chosen similarity function need to be further explained and justified. Other Comments Or Suggestions: - Eq. (53)-(54): The notation $C_2$ is overloaded. - Eq. (56): The right bracket at the end shall be moved to the left. Questions For Authors: 1. Based on the theoretical results in Theorems 4.1-4.4, it appears that PreCo can only achieve Pareto stationarity and near-stationary points of the similarity function. Accordingly, the results do not fully match the claims in the introduction. Can the authors comment on this? 2. How is the conditional policy utilized in PreCo (Algorithm 1)? Is there any knowledge sharing across preferences in PreCo?
How are the different preferences handled during training?
3. Building on 1., if PreCo needs to handle each preference separately, then how does PreCo handle the critical sample efficiency issue in MORL?
4. How many environment steps are used by each algorithm in the experiment?
5. How is the similarity function chosen in PreCo? While I can understand that it is maximized when $p$ and $v$ are fully aligned, it would be good to explain the nice properties of the chosen function in more detail.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer, We sincerely thank you for your time and detailed review. We hope the responses below address your concerns.

---

# Answers To Your Questions

---

**A1:** In practical deep RL/MORL, first-order gradient-based algorithms are the most widely used, and stationarity is the strongest guarantee [1] they can achieve in practice. Hence, prominent MOO algorithms [2,3,4] establish only Pareto stationarity. Compared to existing works, ours is one of the few having provable convergence under noisy gradients.

---

**A2:** We only need to train a single policy to handle different preferences as input conditions. During training, preferences are sampled uniformly without using prior knowledge. These are explained in lines 143–145 of Section 3 and Algorithm 2 in Appendix E. This standard approach in conditional MORL [5,6] ensures shared parameters and knowledge across all preferences.

---

**A3:** As noted in A2, preferences are uniformly sampled during training in a meta-learning fashion, applied consistently across all methods. Our high-level objective design makes it a generally applicable framework for all RL backbones (on-policy & off-policy). Its contribution is orthogonal to lower-level improvements in sample efficiency. As discussed in Appendix A.1, when adapted to off-policy methods, it can integrate techniques like Hindsight Experience Replay (HER) to enhance sample efficiency.

---

**A4:** In the MuJoCo environments (Ant, Hopper, Reacher), all methods use 3e6 environment steps. For the FruitTree environment, all methods except PDMORL use 3.6e5 steps, while PDMORL follows its original implementation with 1e6 steps.

---

**A5:** PreCo's theoretical properties require the similarity function to be:
1. **Lipschitz smooth**
2. Its gradient $g_s$ must be a **positive linear combination** of the objective gradients $g_{1:m}$

The similarity in Definition 4.1 meets these criteria, and its magnitude does not affect PreCo's properties.
Intuitively, it encourages certain objectives to improve more, so the achieved trade-offs get closer to the pre-defined preference. See Appendix F.1 and Figure 10 for further insight.

---

# Concerns About Methods

To address your concerns, we summarize a few key points:

**Generalizability:** We focus on a MORL framework design that is generally applicable to all on- & off-policy RL. In contrast, techniques like HER that exploit inherent RL properties are limited to off-policy methods.

**Novelty & Contributions:**
1. **Conceptual:** We formulate PCRL for any-preference alignment, overcoming the limitations of prior MORL methods using Linear Scalarization (LS), which has no alignment guarantee.
2. **Methodological:** We integrate modern MOO algorithms into MORL to handle conflicting and stochastic gradients—key aspects previously overlooked. Specifically for PCRL, we design PreCo to inherit these strengths while promoting preference alignment.
3. **Technical:** PreCo’s similarity design requires independent proofs (App. I.1) and significant adaptations (App. I.2-3), as no prior method incorporates such similarity. Moreover, as noted in lines 1670-1672, our analysis is more rigorous than previous literature.

**Practicality:** Our assumptions align with standard RL/MORL settings: most RL methods (e.g. policy gradients, DQN loss) are **unbiased** (Ass.2), and their values typically change with a certain level of smoothness (Ass.1).

---

# About Theoretical Correctness

As noted in Assumption 2, the expectation here is over the gradient noise $\xi$, and Assumptions 1-3 ensure unbiased, bounded gradients for all $\pi_{p,t}$ samples. Thus, the bounds in (51)-(54), (56)-(61), and (68)-(70) apply to all $\pi_{p,t}$ samples and remain valid under expectation over $\pi_{p,t}$. We appreciate the detailed feedback, but treating (51)–(54) as conditional expectations does not impact the proof's correctness.
--- # About Experiments **A2-A4** have answered questions about handling different preferences and environmental steps. Here, we address the concerns about the baselines. Envelope Q-learning (EQL), Conditioned Network (CN), and Q-Pensieve (QP) all optimize a linear scalarization (LS) objective and inherit its limitations. The best-performing QP achieves a **6859.94** hypervolume for 6D FruitTree, identical to the best cases of our implemented general LS baseline. This is expected, as QP is simply a more sample-efficient version of LS. Since we already include an LS baseline and CAPQL, which achieves a higher upper bound than LS methods, comparing against EQL, CN, or QP is unnecessary. PCN is a heuristic method relying on model generalizability. Its comparison to other methods is limited. It only accepts a unique input condition, making it less relevant to our study. --- Please let us know if you have further questions. [1] Kushner 1978 Stochastic [2] Sener 2018 MGDA [3] Liu 2021 CAGrad [4] Xiao 2023 SDMGrad [5] Liu 2023 CAPQL [6] Yang 2019 Envelope Q --- Rebuttal Comment 1.1: Comment: Thank the authors for the response. Some of my questions have been nicely addressed in A2-A5. **A1**: Thanks for the clarification. I understand that stationarity is the main convergence property shown for gradient-based methods *in general optimization* (and that’s why MOO algorithms establish only Pareto stationarity). However, that does not necessarily mean that stationarity is the only thing that one can look for in RL. See [1-2] and references therein. The response appears to also echo the original review comment "*..., one major concern is that technically this paper is more like an MOO paper instead of an MORL work. Based on the formulation and the PreCo algorithm, it appears that the vector-valued objective function $\mathbf{v}^{\pi}$ can be replaced by any objective function and does not necessarily need to be the total expected reward. 
Indeed, with the unbiasedness assumption (Assumption 4.2) and regularity conditions (Assumptions 4.1 and 4.3), the analysis in this paper completely gets rid of the inherent properties of MORL and thereby can simply follow the standard analysis of the MOO literature.*” Please let me know if I missed anything. [1] Bai et al., “Joint Optimization of Concave Scalarized Multi-Objective Reinforcement Learning with Policy Gradient Based Algorithm,” JAIR 2022. [2] Zhou et al., “Anchor-Changing Regularized Natural Policy Gradient for Multi-Objective Reinforcement Learning,” NeurIPS 2022. **Sample efficiency**: Thanks for the clarification on how PreCo handles different preferences. However, my concern on the sample efficiency of PreCo still remains (also stated in the original comments in Experimental Designs Or Analyses). Given that PreCo does not introduce any specific design for improving sample efficiency and also uses the conventional uniform preference sampling, it is not totally clear why PreCo can achieve a higher HV than the benchmark MORL methods (like PD-MORL, CAPQL, and GPI-LS) on both Fruit-Tree and MuJoCo tasks, given that PD-MORL, CAPQL, and GPI-LS are known to be quite strong in terms of sample efficiency. **Experiments**: Thanks for providing the additional results. However, Q-Pensieve (QP) was originally designed for continuous control tasks, but FruitTree is by default a discrete control task. Then, how is QP adapted to FruitTree (e.g., by discretizing the actions)? Accordingly, comparing PreCo with QP directly on continuous control tasks (e.g., MO-Hopper, MO-Ant) would be more fair. ----- Edit ----- Thank the authors for the follow-up response. Most of my concerns have been alleviated to some extent. I have updated my score accordingly. I would encourage the authors to include these additional discussions to improve the clarity of the paper. --- Reply to Comment 1.1.1: Comment: Thanks for your time and the further questions. 
We try to address your remaining concerns below:

---

> ## **Regarding A1**

We appreciate your constructive comments and the references ([1][2]). Inspired by your feedback, we now understand your emphasis on more RL-specific properties. Proposition 3 in [3] shows that **in MORL, the Pareto front is convex (but not necessarily strictly convex), ensuring that Pareto stationary points are always Pareto optimal**. We will formalize this in the revisions to strengthen the RL-specific theoretical guarantees.

Furthermore, [1] and [2] are not first-order methods but natural policy gradient (quasi-second-order) approaches that:
* Are more computationally intensive
* Require stricter assumptions, including unbiased Fisher information matrix estimation, concavity of the scalarization function, Lipschitz smoothness, and bounded gradients

Although, as mentioned, stationary solutions are optimal for MORL, whether this optimality is achieved does not diminish our contribution, since most existing MORL methods, such as EQL, QP, and GPI-LS, lack global optimality guarantees when applied to continuous spaces using DDPG or SAC. In 'Concerns About Methods,' we tried to clarify the motivation and contribution by formulating MORL as a general MOO problem and leveraging recent MOO advancements.

---

> ## **Regarding concerns about sample efficiency**

> **Validity of empirical results**

Sample efficiency is not the sole factor influencing performance. Our main argument is that the LS objective has inherent limitations, regardless of the sample efficiency of LS methods. As shown in Figure 1(b) or Figure 4.9 of [7], **LS can only discover a limited set of optimal policies when the Pareto front is not strictly convex**. This explains why the best LS method, QP, achieves a similar hypervolume to our LS baseline (without modifications for better sample efficiency), as both are constrained by the LS objective. Our results are reasonable.
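The claim that LS misses Pareto-optimal points on non-strictly convex fronts can be checked with a minimal numerical sketch (the three value vectors below are illustrative, not the paper's actual FruitTree values):

```python
import numpy as np

# Three Pareto-optimal value vectors; (0.4, 0.4) sits in a non-convex
# (concave) region of the front: no other point dominates it componentwise,
# yet it never maximizes any linear scalarization w^T v.
front = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.4, 0.4]])

# Sweep preference weights w = (t, 1 - t) and record which point LS selects.
picked = {int(np.argmax(front @ np.array([t, 1.0 - t])))
          for t in np.linspace(0.0, 1.0, 101)}
print(picked)  # {0, 1}: the concave-region point (index 2) is never found
```

Whatever the weight, the scalarized score of the middle point is a fixed 0.4, always beaten by one of the extreme points, which mirrors the hypervolume gap discussed above.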
In the FruitTree environment, with non-strictly convex objectives and low dimensionality, sample efficiency is not the performance bottleneck, leading to a noticeable performance gap between PreCo and LS methods. In contrast, in MuJoCo tasks, where objectives are strictly convex and sample efficiency is more crucial, the performance gap is smaller.

> **Sample efficiency of proposed approach**

MOO algorithms benefit from finding conflict-avoidant directions, which also improves sample efficiency. The toy example in Figure 11 and Appendix G illustrates how MOO algorithms like PreCo, MGDA, and SDMGrad converge in all cases, while GD (Figure 11(e), which optimizes the weighted sum of objectives, the same as LS) converges more slowly in some cases with more conflicting objective gradients. This advantage is general and applies to both on- and off-policy RL backbones with our proposed PCRL. However, sample efficiency techniques like HER [5] for EQL, PD-MORL, QP, and Q-snapshots [6] are exclusive to off-policy Q-value-based methods.

---

> ## **Regarding QP Experiments**

For discrete control tasks, Q-Pensieve (QP) is implemented by adding Q-snapshots to the Q-envelope method from the MORL benchmark [4]. The greedy action for preference $p$ is given by: $$\arg\max_a \sup_{p',Q'}p^T Q'(s,a;p'),$$ where $Q'$ is the Q-value sampled from the Q-snapshots and $p'$ is the sampled preference used in the Q-envelope operation. Despite these enhancements, QP remains an LS method that aims to maximize the weighted sum of values according to $p$. As discussed, **the LS objective inherently constrains performance, regardless of an algorithm’s optimization efficiency**. Our results align with the best LS results from the benchmark [4]. Environments with non-strictly convex Pareto fronts better highlight the unique advantages of our method, which is to overcome the limitations of existing LS methods.
FruitTree, for example, illustrates the fundamental limitations of LS methods, as analyzed in Section 5.1 and shown in Figure 4. --- Once again, we thank you for your time and engagement. The discussion has been constructive. Please let us know if you have any remaining concerns. --- > ## References [1] Bai et al., “Joint Optimization of Concave Scalarized Multi-Objective Reinforcement Learning with Policy Gradient Based Algorithm,” JAIR 2022 [2] Zhou et al., “Anchor-Changing Regularized Natural Policy Gradient for Multi-Objective Reinforcement Learning,” NeurIPS 2022 [3] Lu et al, "Multi-Objective Reinforcement Learning: Convexity, Stationarity and Pareto Optimality" ICLR 2023 [4] Felten et al. "A Toolkit for Reliable Benchmarking and Research in Multi-Objective Reinforcement Learning" NeurIPS 2023 [5] Andrychowicz et al, "Hindsight Experience Replay" 2017 [6] Hung et al. "Q-Pensieve: Boosting Sample Efficiency of Multi-Objective RL Through Memory Sharing of Q-Snapshots" ICLR 2023 [7] Boyd et al. "Convex Optimization" 2004
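For concreteness, the envelope-style greedy rule quoted in the QP discussion above can be sketched as follows (a hedged illustration; the function name and array shapes are our assumptions, not Q-Pensieve's actual code):

```python
import numpy as np

def qp_greedy_action(p, q_snapshots):
    """Greedy action argmax_a sup_{p', Q'} p^T Q'(s, a; p') for preference p.

    p:           (m,) preference vector.
    q_snapshots: list of (n_actions, m) arrays, each holding Q'(s, ., p')
                 for one sampled snapshot/preference pair.
    """
    # Scalarize every snapshot's multi-objective Q-values by the preference p ...
    scalarized = np.stack([Q @ p for Q in q_snapshots])  # (n_snap, n_actions)
    # ... take the envelope (sup over snapshots/preferences), then be greedy.
    return int(np.argmax(scalarized.max(axis=0)))
```

Note that the action scores are still weighted sums $p^T Q'$, which is why the rebuttal classifies QP as an LS method despite the envelope operation.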
TimeFilter: Patch-Specific Spatial-Temporal Graph Filtration for Time Series Forecasting
Accept (poster)
Summary: This paper introduces a novel dependency modeling strategy for time series forecasting, distinct from the commonly used channel-independent (CI) and channel-dependent (CD) approaches. The proposed Patch-wise Filtration method balances CI and CD, allowing for fine-grained and dynamic modeling of spatiotemporal relationships between patches while considering computational complexity. The experimental results look convincing, and the ablation studies enumerate several alternative solutions, demonstrating the benefits of TimeFilter's filtering mechanism based on the MoE architecture.

Claims And Evidence: This paper asserts that the relationships between patches become effective or ineffective over time, which is difficult to capture using channel-wise modeling methods. First, this claim is intuitive. Moreover, the example illustration in the Introduction and the experimental figures in the Case Study clearly validate this assertion. Additionally, the visualization of the four modeling approaches highlights the differences between various granularity-based modeling methods.

Methods And Evaluation Criteria: This paper introduces a novel dependency modeling approach, offering a fresh perspective to the time series forecasting community. The usage of datasets and metrics follows the standard practices in previous works.

Theoretical Claims: There is no theoretical claim in this paper.

Experimental Designs Or Analyses: The experiments in this paper are comprehensive, covering both short-term and long-term forecasting tasks, ablation studies, case studies, analysis of training time and memory consumption, as well as the impact of the lookback horizon. The above results look convincing and I cannot find major flaws.

Supplementary Material: Yes, I went through the appendix. The detailed metrics of the main experiments and the error bars presented in the appendix are beneficial for better presentation.
Relation To Broader Scientific Literature: The choice between CI and CD, and how to better model dependencies for temporal representation, is always a hot topic in the time series community. The findings and implementation in this paper can be conducive to tasks such as time series forecasting, imputation, classification, and anomaly detection.

Essential References Not Discussed: This paper has a very good coverage of the literature.

Other Strengths And Weaknesses:

Strengths:
1. The proposed framework is a new perspective in time series dependency modeling, which performs well in terms of both forecasting performance and efficiency.
2. This paper conducts extensive experiments.
3. The writing is clear and the figures are pleasing.
4. The paper consistently presents clear intuitions and thoughtful design choices.
5. I especially appreciate the case study in the paper, which draws a conclusion to the continuing debate between CI and CD methods in recent years.

Weaknesses:
1. The theoretical complexity of this method is quadratic, which does not offer the low memory footprint characteristic of linear-based methods.
2. Though not a deal breaker, the paper lacks a clear description of the hyperparameter tuning process, especially regarding the patch length parameter. In the appendix, the chosen patch lengths vary significantly across different datasets, raising questions about how this hyperparameter is determined.
3. The idea of masking (or pruning) dependencies has existed before, although not as refined or effective as presented in this paper.

Other Comments Or Suggestions: N.A.

Questions For Authors:
1. The authors convert each patch into an ego graph and prune it, claiming that this operation can be parallelized without introducing additional complexity. However, I am curious about how exactly this parallelization is achieved, as the paper does not provide a clear explanation.
2. Masking attention can achieve similar effects.
Why did the authors opt for a pure GNN architecture instead? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your insightful advice for polishing our manuscript. We have conducted sufficient experiments and analysis to dispel your concerns. The details can be found below.

## Other Strengths And Weaknesses

---

**`W1`: Analysis of theoretical complexity.**

**`R1`:** We include a detailed comparison of the theoretical complexity of our method and other SOTA models. The table below summarizes the theoretical computational complexity, where C is the number of channels, L is the input length, and P is the patch length.

|TimeFilter|iTransformer|PatchTST|Crossformer|
|-|-|-|-|
|$O(C^2·(\frac{L}{P})^2)$|$O(C^2)$|$O(C·(\frac{L}{P})^2)$|$O(\frac{C}{P^2}·L^2)$|

Moreover, theoretical complexity alone cannot fully capture real-world performance due to implementation differences. We tested on one NVIDIA A100 GPU, measuring training (1 epoch) and inference times for three datasets of increasing size, with results averaged over 5 runs in the table below.

|||TimeFilter|iTransformer|PatchTST|Crossformer|
|-|-|-|-|-|-|
|Weather ($C=21$)|Training|19s|23s|41s|74s|
||Inference|4s|6s|9s|11s|
|Electricity ($C=321$)|Training|91s|74s|280s|248s|
||Inference|14s|11s|36s|34s|
|Traffic ($C=862$)|Training|77s|101s|352s|360s|
||Inference|14s|18s|66s|81s|

---

**`W2`: Description of the hyperparameter tuning process, especially the patch length parameter.**

**`R2`:** We conducted an extensive hyperparameter search covering learning rates from $5\times 10^{-4}$ to $10^{-3}$, encoder layers from $1$ to $3$, $d_{model}$ values from $32$ to $512$, training epochs from $10$ to $30$, and patch lengths from $2$ to $96$. In particular, the patch length $P$ has a significant impact on the results, as it determines the contextual semantic information captured. We chose $P$ based on the characteristics of the dataset, experimenting with values of $L, \frac{L}{2}, \frac{L}{3}$, etc. This data-driven, result-oriented tuning ensures optimal performance.
For transparency, we'll describe this process in detail in the appendix.

---

**`W3`: Difference from other dependency-masking (or pruning) methods.**

**`R3`:** While previous work has explored dependency masking, our approach distinguishes itself by filtering based on dependency types rather than binary causal relationships. Traditional methods often mask dependencies by assuming fixed causal relationships, which can be misleading due to spurious correlations in short-term dependencies. Our innovation lies in recognizing that different domains require distinct dependency types. For example: traffic flow prediction benefits from spatial dependencies (e.g., interactions between adjacent road segments); financial markets often require filtering out short-term noise while preserving long-term trends; electricity demand forecasting relies heavily on periodic dependencies tied to daily/seasonal cycles. The significant performance gains in our experiments validate the effectiveness of this type of dependency filtering and demonstrate its superiority over generic causal masking strategies.

## Questions For Authors

---

**`Q1`: Details of parallel pruning.**

**`R1`:** Sorry for the confusion. For each patch i, its ego graph corresponds to a row in the overall adjacency matrix M. Pruning is achieved by directly modifying M. For the three filters, we pre-generate masks. Then, by element-wise multiplication of M with the selected mask via Einstein summation, we efficiently parallelize the filtering without any additional complexity. We'll clarify this in the revised manuscript.

---

**`Q2`: Why choose a pure GNN architecture rather than attention?**

**`R2`:** We appreciate this insightful technical question. The choice of a GNN-based architecture over masked self-attention is motivated by both computational efficiency and inherent advantages in modeling temporal dependencies.
GNNs offer lower complexity at $O(Qm)$, where $m$ (pruned to $O(kn)$) is the number of edges, compared to Transformers' quadratic $O(n^2)$ complexity ($Q$ layers, $m$ edges, $n$ nodes). GNNs are inherently adept at modeling instance relationships through explicit graph structures, preserving inductive bias and offering better interpretability than the often opaque attention weights. We'll ensure these advantages are clearly articulated in the revised manuscript. We hope our response addresses your concerns.
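The mask-based parallel pruning described in R1 above (one ego graph per row of the adjacency matrix M, filtered by pre-generated masks combined via Einstein summation) might look like the following sketch; the function and variable names are illustrative assumptions, not TimeFilter's actual implementation:

```python
import numpy as np

def prune_adjacency(M, masks, scores, threshold=0.5):
    """Filter every patch's ego graph (one row of M) in parallel.

    M:      (n, n) dense adjacency matrix over all patches.
    masks:  (3, n, n) pre-generated masks for the temporal, spatial,
            and spatial-temporal dependency subgraphs.
    scores: (n, 3) per-patch routing scores over the three filters.
    """
    # Per patch (row), decide which dependency types to keep.
    keep = (scores >= threshold).astype(M.dtype)            # (n, 3)
    # Combine the chosen masks row-wise via Einstein summation:
    # row_mask[i, j] = sum_f keep[i, f] * masks[f, i, j]
    row_mask = np.einsum("nf,fnj->nj", keep, masks)
    row_mask = np.clip(row_mask, 0.0, 1.0)                  # union of kept filters
    # One element-wise multiplication prunes all ego graphs at once.
    return M * row_mask
```

Because the whole operation is a single batched multiply, no per-patch loop is needed, which matches the rebuttal's claim of filtering "without any additional complexity".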
Summary: This paper proposes a Patch-wise filtering modeling approach to select important dependencies and remove irrelevant noisy relationships. It integrates the benefits of CI and CD strategies and offers a more fine-grained and adaptive consideration of dynamically evolving dependencies over time compared to the CC strategy. The paper conducts extensive experiments, demonstrating significant improvements across various lookback lengths. Additionally, detailed ablation studies are provided. Overall, the paper is of good quality. Claims And Evidence: The paper substantiates its claim of dynamic, fine-grained relationships between patches by visualizing the dependency graphs and the distribution of gating mechanisms during the modeling process. Methods And Evaluation Criteria: The proposed method, TimeFilter, offers a new and better alternative to the conventional modeling approaches of CI and CD. Like many other papers, the paper also employs commonly used time series forecasting datasets such as ETT, Traffic, and so on, as well as the widely adopted error metrics of MSE and MAE. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experimental design in this paper is more interesting compared to other studies. In addition to using fixed lengths, the authors introduced extended experiments from the perspective of the scaling law of TSF. The ablation studies compared TimeFilter with other intuitive filtering methods to demonstrate the rationality of their design. They also provide visualisations to illustrate the performance improvements brought by the proposed modules. The code is provided, too. Supplementary Material: The Supplementary Material of this paper, like that of many others, provides a detailed introduction to the data, comprehensive experimental results, and visualizations of the prediction performance. The inclusion of error bars clearly demonstrates the significant improvement of TimeFilter. 
Relation To Broader Scientific Literature: In addition to TSF, other domains such as spatio-temporal prediction, speech and acoustics also involve similar fine-grained, dynamically evolving dependencies. This paper offers new insights into leveraging such dependencies. Essential References Not Discussed: I think there are no essential references that have not been discussed. Other Strengths And Weaknesses: Strengths: 1. The paper is well-motivated, as the challenge of effectively modeling both temporal and inter-channel relationships. 2. This paper is well written. The notations are clear. 3. The experiments are comprehensive, and the compared baselines are up-to-date. Weaknesses: 1. Some modules exhibit limited novelty. For instance, the idea of Dynamic Expert Allocation is not first proposed in this paper. 2. It would be better if the authors also visualize the dependency graphs of the other baselines. Other Comments Or Suggestions: 1. In Equation 14, the specific operation of **COMB** in TimeFilter is not clearly explained. 2. In Equation 18, the value of λ1 and λ2 are not specified. Questions For Authors: Why is the dependency modeling graph of TimeFilter in Figure 2(c) not symmetric along the diagonal? Shouldn’t the relationship between patch i and patch j be filtered simultaneously in both directions, removing the influence of patch i on patch j and vice versa? Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your insightful advice. Here are responses to your questions:

## Other Strengths And Weaknesses

**`W1`: Some modules exhibit limited novelty. For instance, the idea of Dynamic Expert Allocation is not first proposed in this paper.**

**`R1`:** We sincerely appreciate your deep expertise in identifying this important aspect. Regarding Dynamic MoE's adaptive token-wise expert allocation, which addresses the constraints of predetermined expert selection: our TimeFilter does indeed build on this previous work [1,2], but we would like to clarify that our paper does not claim this as our primary contribution. Our key innovation lies in fine-grained dependency modeling beyond CD/CI, which can be realized through various mechanisms. Dynamic MoE serves as our implementation choice, but alternative approaches like attention-gated dependency routers or hierarchical graph convolutions with adaptive receptive fields could also instantiate our core paradigm. The novelty lies in the proposed dependency disentanglement framework rather than the specific routing mechanism.

[1] Harder Tasks Need More Experts: Dynamic Routing in MoE Models.
[2] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models.

---

**`W2`: It would be better if the authors also visualize the dependency graphs of the other baselines.**

**`R2`:** We've visualized PatchTST's and iTransformer's attention on the Weather dataset in https://i.postimg.cc/Jn42zB0b/rebuttal-fig.png. TimeFilter achieves more precise dependency modeling by selectively focusing on specific types of relationships, unlike PatchTST's narrow focus on mutation points and iTransformer's overly generalized attention across periods. This selective approach allows TimeFilter to better identify meaningful interactions while filtering out noise.

## Other Comments Or Suggestions

---

**`C1`: Explanation of the **COMB** operation in TimeFilter.**

**`R1`:** Sorry for the confusion.
COMB refers to the combination of ego and neighbour representations. We implement COMB using a simple FFN, similar to GCN [1]. We'll clarify this in the revised manuscript for better understanding. [1] Semi-Supervised Classification with Graph Convolutional Networks. --- **`C2`:** Hyper-parameters explanation. **`R2`:** In Equation 18, $\lambda_1$ and $\lambda_2$ are scaling factors for the loss functions. In our experiments, we set $\lambda_1$ to 0.05 and $\lambda_2$ to 0.005. These values ensure the three loss functions are balanced, facilitating gradient optimization. ## Questions For Authors --- **`Q1`: Why is the dependency graph of TimeFilter in Figure 2(c) not symmetric along the diagonal?** **`R1`:** The dependency modeling graph in Figure 2(c) is not symmetric because each patch has its own customized dependency relationships. Specifically, the filtering process for patch i's ego graph is independent of patch j's, and the selected filters may differ between them. This asymmetry is intuitive in real-world scenarios. For example, in financial systems, macroeconomic policy factors can directly influence price-volume factors (e.g., a positive policy change boosts market activity), but the reverse influence (e.g., price-volume changes trigger policy adjustments) is often weaker or absent. Thus, asymmetric dependencies naturally arise in such contexts. --- Once again, we are deeply grateful for your recognition of our work and your constructive feedback. If you have any further questions or suggestions, we would be more than happy to address them.
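Since the rebuttal above describes COMB only as "a simple FFN, similar to GCN" combining ego and neighbour representations, one plausible minimal sketch (our assumption: the FFN acts on concatenated ego and mean-aggregated neighbour features; all parameter names are illustrative) is:

```python
import numpy as np

def comb(H, A_pruned, W, b):
    """COMB sketch: aggregate neighbours via the pruned adjacency,
    then mix ego + neighbour representations with a simple FFN (GCN-style).

    H:        (n, d) patch embeddings.
    A_pruned: (n, n) filtered adjacency matrix.
    W:        (2d, d) weight and b: (d,) bias of the illustrative FFN.
    """
    deg = A_pruned.sum(axis=1, keepdims=True) + 1e-8
    neigh = (A_pruned @ H) / deg                # mean-aggregated neighbour view
    z = np.concatenate([H, neigh], axis=1)      # ego || neighbour
    return np.maximum(z @ W + b, 0.0)           # ReLU feed-forward layer
```

Only edges surviving the filtering step contribute to `neigh`, so the update respects each patch's customized dependency set.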
Summary: This paper introduces a novel approach that addresses the limitations of channel-independent (CI), channel-dependent (CD), and channel-clustering (CC) strategies. The proposed TimeFilter transitions from previous coarse-grained, channel-wise clustering approaches to a finer-grained, patch-wise partitioning strategy. Specifically, TimeFilter constructs a spatial-temporal graph based on patch-level distances by k-Nearest Neighbor methods. Moreover, a mixture-of-experts mechanism is applied to dynamically route and filter dependencies for each time patch. An Adaptive Graph Learning (AGL) module updates the time series embedding through neighborhood aggregation and performs forecasting. Extensive experiments show that TimeFilter generally outperforms other baselines, validating the effectiveness of the proposed framework.

Claims And Evidence:
- The paper claims appropriate dependency modeling strategies. However, there is no direct quantitative breakdown of how much each component of the constructed dependency graphs contributes to the performance improvement. There is a noticeable lack of ablation studies on the importance of temporal, spatial, and spatial-temporal subgraphs.
- The paper shows the impact of the filtering component. However, the definition of "irrelevant" is implicit in the model's learning process. Visualizations or statistical analysis of the filtered connections could be beneficial.

Methods And Evaluation Criteria: The proposed TimeFilter and the decomposition of temporal, spatial, and spatial-temporal subgraphs are reasonable for cross-channel time series modeling. Discussions exploring real-world scenarios and specific use cases where these three graph types prove necessary would be beneficial.

Theoretical Claims: There is no proof for theoretical claims in the paper.

Experimental Designs Or Analyses: The paper uses standard benchmark datasets for time series forecasting and includes a good range of baseline methods.
Supplementary Material: Yes. I reviewed the supplementary material.

Relation To Broader Scientific Literature: The paper clearly articulates the limitations of existing CI, CD, and CC methods and unifies these channel strategies in Figure 2. TimeFilter achieves the best trade-off between completely separated and completely collaborative channel dependencies.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

Pros:
- The motivation for dependency filtering and graph modeling for cross-channel time series forecasting is clear and reasonable.
- The paper is well-written, clear, and easy to follow.
- Code is available, which promotes the reproducibility of the work.

Cons:
- The sensitivity of hyper-parameters (e.g., number of nodes $n$, threshold $p$) is unclear. There is a lack of detailed ablation study on these necessary hyper-parameters.
- The claim of "Paradigm Transformation" might be seen as exaggerated. While the method is somewhat novel, it builds upon existing research in time series channel strategies and graph neural networks.

Other Comments Or Suggestions: N/A

Questions For Authors:
- What is the computational cost of graph construction, compared with other modules in TimeFilter?
- In Figure 4, the scores for the 3 filters for each dependency subgraph are close to 0. Does that mean $m=2$ is sufficient for TimeFilter modeling?
- What is the complexity of each component in the proposed TimeFilter? There is no formal analysis of the model's complexity.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Here are detailed responses to your questions:

## Claims And Evidence

**E1:** Claims of appropriate dependency modeling strategies.

**R1:** We conducted ablation studies in the table below with variants: tem.-only (T), spa.-only (S), spa.-tem.-only (ST), and their combinations without filtering. The results show that no single dependency combination outperforms others across datasets. The context-aware filtering mechanism reduces prediction errors by adaptively suppressing noise. This confirms that TimeFilter’s performance results from the principled fusion of complementary dependencies, not isolated components. We will expand this analysis in revision.

||TimeFilter|T|S|ST|T&S|T&ST|S&ST|T&S&ST|
|-|-|-|-|-|-|-|-|-|
|Weather|**0.239**/**0.269**|0.242/0.272|0.241/0.272|0.241/0.271|0.241/0.271|0.241/0.271|0.241/0.272|0.243/0.274|
|ECL|**0.158**/**0.256**|0.168/0.265|0.170/0.269|0.162/0.260|0.171/0.271|0.162/0.260|0.163/0.261|0.166/0.264|

**E2:** Irrelevant dependency.

**R2:** "Irrelevant dependencies" refer to transient interactions caused by non-stationary patterns (extreme values, missing records, or noise) that lead to unstable statistical associations. As shown in https://i.postimg.cc/Jn42zB0b/rebuttal-fig.png, TimeFilter captures finer dependencies, distinguishing irrelevant ones. In contrast, PatchTST focuses on mutation points, and iTransformer spreads attention too broadly. This evidence shows how unfiltered dependencies propagate irrelevant ones, while our filtering mechanism preserves crucial context-aware relationships.

## Methods And Evaluation Criteria

**M1:** Discussions exploring real-world scenarios.

**R1:** We provide real-world examples based on intuition: user behavior with low inter-channel correlation benefits from tem. dependencies, while spa. dependencies are key for real-time monitoring. In traffic flow prediction, spa.-tem. dependencies capture dynamic interactions across locations.
For more details, refer to Section 4.2 (Routing Network, lines 205-233).

## Weaknesses

**W1:** Parameter Sensitivity.

**R1:** The number of nodes $n$ is set as $n = C \times \lceil L/P \rceil$, where $P$ is the patch length. The results in the table below show that performance varies with $P$, as different patch lengths capture distinct temporal semantics based on the dataset's characteristics. The threshold $p$ influences the number of favored dependencies. For any $p$ other than $p = 1.0$ (full retention), the model already removes irrelevant dependencies, making it less sensitive to the exact value of $p$.

||TimeFilter($p=0.5$)|$P=96$|$P=48$|$P=32$|$P=16$|$p=0.3$|$p=0.9$|
|-|-|-|-|-|-|-|-|
|Weather|**0.239**/**0.269**|0.242/0.271|**0.239**/**0.269**|0.243/0.272|0.245/0.275|0.241/0.272|0.240/0.270|
|ECL|**0.158**/**0.256**|0.163/0.259|0.166/0.263|**0.158**/**0.256**|0.160/0.258|0.162/0.258|0.159/0.257|
|Traffic|**0.407**/**0.268**|**0.407**/**0.268**|0.427/0.278|0.433/0.283|0.430/0.281|0.415/0.276|0.413/0.272|

**W2:** Paradigm Transformation.

**R2:** Thank you for recognizing the novelty of our work. We’ll change "Paradigm Transformation" to "Novel Paradigm" and acknowledge prior channel strategies and GNNs.

## Questions

**Q1&Q3:** Complexity.

**R1:** We compare the theoretical complexity of TimeFilter with other Transformer-based models in the table below. $C$ is the number of channels, $L$ is the input length, and $P$ is the patch length.

|TimeFilter|iTransformer|PatchTST|Crossformer|
|-|-|-|-|
|$O(C^2·(\frac{L}{P})^2)$|$O(C^2)$|$O(C·(\frac{L}{P})^2)$|$O(\frac{C}{P^2}·L^2)$|

The graph construction and filtering modules have complexity $O((\frac{CL}{P})^2)$, and the GNN module is $O(CL)$.
However, theoretical complexity alone doesn't fully capture real-world performance, as shown by testing on an NVIDIA A100 GPU with training (1 epoch) and inference times averaged over 5 runs:

|||TimeFilter|iTransformer|PatchTST|Crossformer|
|-|-|-|-|-|-|
|Weather ($C=21$)|Training|19s|23s|41s|74s|
||Inference|4s|6s|9s|11s|
|Electricity ($C=321$)|Training|91s|74s|280s|248s|
||Inference|14s|11s|36s|34s|
|Traffic ($C=862$)|Training|77s|101s|352s|360s|
||Inference|14s|18s|66s|81s|

Despite higher theoretical complexity, TimeFilter outperforms PatchTST in training and inference speed. This efficiency gain likely comes from structured sparsity in filtering, where einsum-based operations produce sparse matrices, reducing FLOPs and memory overhead.

**Q2:** Is $m=2$ sufficient for modeling?

**R2:** The value of $m$ is not a fixed hyperparameter. It represents the number of filter types, determined dynamically by the confidence threshold ($p$). Typically, one or two filters exceed $p$, so the dynamic routing mechanism selects one or two dependencies. When all three filters are active, it indicates a need for full channel dependency structures, though only a few patches require all dependencies.

Thank you again for your careful review and constructive suggestions, which have inspired us to improve our paper further.
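One plausible reading of the threshold-based routing described in this rebuttal (a hedged sketch of a nucleus-style selection; `select_filters` and the cumulative-mass interpretation of $p$ are my own assumptions, not the authors' code): keep the smallest set of dependency filters whose softmax scores accumulate to at least $p$, so the number of active filters $m$ varies per patch.

```python
import numpy as np

def select_filters(logits, p=0.5):
    """Nucleus-style selection over dependency-filter scores
    (illustrative reading of confidence-threshold routing)."""
    scores = np.exp(logits - logits.max())
    scores /= scores.sum()
    order = np.argsort(scores, kind="stable")[::-1]  # highest score first
    cum = np.cumsum(scores[order])
    m = int(np.searchsorted(cum, p) + 1)             # filters kept
    return order[:m], scores
```

With $p=0.5$, a single dominant filter is kept alone, while two comparable filters are kept together, matching the observation that typically one or two dependencies are selected.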
Summary: In this paper, the authors propose to improve multivariate time series (MTS) forecasting via the TimeFilter framework, which introduces patch-specific spatial-temporal graph filtration to model dynamic dependencies. Traditional MTS forecasting approaches either follow a channel-independent (CI) approach, ignoring inter-channel dependencies, or a channel-dependent (CD) approach, which captures all dependencies indiscriminately, often leading to noise and reduced robustness. The authors argue that both methods have limitations, particularly in capturing time-varying dependencies. To address this, TimeFilter employs a fine-grained dependency modeling technique that filters out irrelevant correlations and retains only the most significant ones in a patch-specific manner. The framework is based on a graph neural network (GNN) architecture and dynamically adjusts its approach to each dataset's characteristics, using a mixture-of-experts mechanism. Extensive experiments across several real-world datasets demonstrate that TimeFilter outperforms state-of-the-art methods in both long- and short-term forecasting tasks.

Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.

Methods And Evaluation Criteria: One of the main concerns is the limited contribution of this paper. The proposed idea is conceptually similar to existing Granger causality-based methods [1, 2] and instantaneous time series [3], yet the authors fail to discuss this connection. Specifically, the temporal dependencies and spatial-temporal dependencies can be described by Granger causality, and the inter-channel (spatial) dependencies can be described as instantaneous dependencies.

[1] Marcinkevičs, Ričards, and Julia E. Vogt. "Interpretable models for granger causality using self-explaining neural networks." arXiv preprint arXiv:2101.07600 (2021).
[2] Tank, Alex, et al. "Neural granger causality."
IEEE Transactions on Pattern Analysis and Machine Intelligence 44.8 (2021): 4267-4279.
[3] Lippe, Phillip, et al. "Causal representation learning for instantaneous and temporal effects in interactive systems." arXiv preprint arXiv:2206.06169 (2022).

Theoretical Claims: The paper lacks theoretical grounding or in-depth analysis to explain why the proposed method leads to performance improvements. A deeper understanding of the underlying mechanisms would clarify and strengthen the contribution.

Experimental Designs Or Analyses: The proposed method is very similar to [4], but the authors do not consider it as a baseline.

[4] Zhao, L. and Shen, Y. Rethinking channel dependence for multivariate time series forecasting: Learning from leading indicators. In The Twelfth International Conference on Learning Representations, 2024.

Supplementary Material: Yes

Relation To Broader Scientific Literature: N.A.

Essential References Not Discussed: Please refer to "Methods And Evaluation Criteria" and "Experimental Designs Or Analyses".

Other Strengths And Weaknesses: N.A.

Other Comments Or Suggestions: N.A.

Questions For Authors: N.A.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
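To make the Granger-causality framing in this review concrete, here is a toy linear Granger-style score in the spirit of [1, 2] (an illustrative sketch; `granger_gain` and the lag handling are my own choices, not code from the paper): it measures the relative variance reduction in predicting one channel when lags of another channel are added to its own lags.

```python
import numpy as np

def granger_gain(x, y, lag=2):
    """Toy linear Granger-style score: how much do lags of x help
    predict y beyond y's own lags? (Illustrative only.)"""
    T = len(y)
    own = np.column_stack([y[lag - k:T - k] for k in range(1, lag + 1)])
    cross = np.column_stack([x[lag - k:T - k] for k in range(1, lag + 1)])
    full = np.hstack([own, cross])
    tgt = y[lag:]
    # least-squares residuals with and without the cross-channel lags
    r_own = tgt - own @ np.linalg.lstsq(own, tgt, rcond=None)[0]
    r_full = tgt - full @ np.linalg.lstsq(full, tgt, rcond=None)[0]
    return 1.0 - (r_full @ r_full) / (r_own @ r_own)
```

A large gain for x→y together with a small gain for y→x indicates a directed (spatial-temporal) dependency; near-zero gains in both directions suggest the channels can be treated independently.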
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Here are responses to your insightful concerns and questions:

---

**`Q1`: Novelty and Contribution. (The proposed idea is conceptually similar to existing Granger causality-based methods [1, 2] and instantaneous time series [3], yet the authors fail to discuss this connection.)**

**`R1`:** We would like to clarify that our core innovation lies in proposing a novel fine-grained dependency modeling paradigm. Existing methods predominantly focus on channel-wise correlations at a coarse granularity, whereas our approach explicitly models time-varying fine-grained dependencies that cannot be effectively captured by channel-wise interactions alone.

The foundational motivation of TimeFilter stems from the following observation: Real-world time series often exhibit hybrid characteristics containing both causal signals (spa.-tem.&spa.) and inertial signals (tem.). However, the global time series signals do not conform to the causal, auto-generative nature assumed in modeling, but are influenced by unobserved factors [5]. Such forecasting is similar to natural language understanding (NLU), where integrating all latent signals is more crucial.

As the reviewer pointed out, Granger causality [1, 2] only accounts for temporal inertia and spatio-temporal causality, while the instantaneous effects [3] only capture spatial causality. In contrast, TimeFilter integrates all potential signals during graph construction and then adaptively filters out invalid signals under specific segments, achieving superior forecasting capability. To better demonstrate our novelty, we provide a comprehensive comparison across critical dimensions in the following table ($C$ channels, $L$ lookback horizon, $D$ hidden dimensions, and $K$ lead-lag steps).
||TimeFilter|Granger causality|Instantaneous effects|
|-|-|-|-|
|Signals|Tem.&Spa.&Spa.-Tem.|Spa.-Tem.&Tem.|Spa.|
|Granularity|Patch-wise|Channel-wise|Channel-wise|
|Assumption|None|Static|Synchronization|
|Robustness|Yes|No|No|
|Complexity|$O(C^2(\frac{L}{P})^2)$|$O(C^2K+LCD+LD^2)$|$O(LC^3)$|

[5] Time Series Prediction: Forecasting The Future And Understanding The Past
[6] MTEB: Massive Text Embedding Benchmark

---

**`Q2`: Analysis of why TimeFilter improves the performance. (The paper lacks theoretical grounding or in-depth analysis to explain why TimeFilter leads to performance improvements.)**

**`R2`:** We appreciate this insightful question regarding the validation of the effectiveness of our method. The core mechanism can be explained from two complementary perspectives:
- **Theoretical foundation**: As noted in Q1, our filtering mechanism addresses the spurious regression artefacts prevalent in dependency modeling. Specifically, while some signals provide genuine predictive patterns, others exhibit localized spurious correlations arising from short-term segment analysis - a phenomenon particularly pronounced in non-stationary time series [7]. TimeFilter's dynamic pruning acts as a variance reduction operator, suppressing these transient noise signals while preserving true causal relationships.
- **Empirical verification**: As shown in https://i.postimg.cc/Jn42zB0b/rebuttal-fig.png, we quantitatively demonstrate this on the Weather dataset: the filtered adjacency matrix exhibits clearer cluster structures aligned with physical sensor relationships. This fine-grained operation allows it to focus on high-spatial and high-temporal-spatial correlations (e.g., Fea0-Fea1 and Fea1-Fea3) while eliminating irrelevant dependencies (e.g., Fea0-Fea3).
This dual validation by both causal theory and data-driven evidence confirms that our adaptive filtering successfully disentangles persistent patterns from transient noise, directly contributing to the observed performance gains.

[7] Spurious Correlations in High Dimensional Regression: The Roles of Regularization, Simplicity Bias and Over-Parameterization.

---

**`Q3`: Comparison with LIFT [4]. (The proposed method is very similar to [4], but does not consider it as a baseline.)**

**`R3`:** LIFT [4] has been added as a baseline. Our method still outperforms it (PatchTST+LIFT) and we will include LIFT in the revised version. While LIFT performs lead-lag dependency modeling through lead estimation and refinement, it is sensitive to data distribution and less effective for synchronous data without significant lead-lag relationships. TimeFilter enhances such generalization ability and customizes dependency types for datasets from different domains.

||TimeFilter|LIFT|
|-|-|-|
|Metric|MSE/MAE|MSE/MAE|
|Weather|0.216/0.258|0.229/0.262|
|ECL|0.150/0.246|0.158/0.252|
|Traffic|0.360/0.254|0.386/0.260|

---

Overall, thanks again for your valuable comments. We hope the detailed experiments and clarifications provided above address your concerns. If you have any further questions, please feel free to reach out, and we would be happy to provide additional clarifications.
Low-distortion and GPU-compatible Tree Embeddings in Hyperbolic Space
Accept (poster)
Summary: The Authors propose a pipeline to embed trees in hyperbolic space with minimal distortion while retaining the ability to run on GPU accelerators, via a generalised Delaunay embedding algorithm with minimal angle maximization, together with a solid framework that effectively increases precision by using multiple float numbers even when the precision of each variable is limited.

## Update after rebuttal

The authors have provided acceptable evidence that embedding pure trees can contribute to the ICML community, e.g., through action search. It should be included in the manuscript, but it can be easily done in the camera-ready version. I have raised the score to Weak Accept.

Claims And Evidence: The goal of the paper (low distortion with fixed-length float numbers, exploiting hyperbolic space with a general dimensionality) itself is simple, and the proposed method is designed to directly solve the issue. Algorithm 1 is a natural generalisation of Sarkar's embedding in high-dimensional hyperbolic space. Applying the minimal angle maximization is a natural and direct idea to place the children of a node uniformly. Using multiple fixed-precision float numbers also directly solves the issue of the lack of precision required by hyperbolic embedding. They are both convincing.

Methods And Evaluation Criteria: As discussed above, the proposed method succeeds in solving existing issues of hyperbolic embedding of a tree.

Theoretical Claims: The statement of Theorem 4.2 must be rewritten. "Some $\epsilon^*$" makes no sense mathematically. If someone argued that $\epsilon^*=10^{300}$ is still small, there would be no practical implication of Theorem 4.2. If it were a function of $\epsilon$, $t$, or the machine epsilon, it might have some practical implication. Perhaps the Authors wanted to say that we can let $\epsilon^*$ be 16 times the machine epsilon, according to the proof? Theorem 4.4 has the same issue.
Experimental Designs Or Analyses: The experiments are designed soundly so that they can directly evaluate how well the proposed method achieved the initial goal of the paper.

Supplementary Material: I read the supplementary material to understand the claims of the theorems.

Relation To Broader Scientific Literature: The paper might have more contributions in the data compression or data structure contexts.

Essential References Not Discussed: Nothing.

Other Strengths And Weaknesses: As a computer science paper, the paper's contribution is solid and almost complete. Having said that, regrettably, I need to flag that the current manuscript does not clarify the contribution to the ICML community, where the main focus is on machine learning, as its name suggests. For many hyperbolic representation learning methods, represented by (Nickel and Kiela, 2017), the motivation was to learn some representations of a graph, typically not strictly a tree but approximated by a tree, and those obtained representations could be used for link prediction. This is a procedure of retrieving some initially unknown information from data, which can be called "learning." However, the proposed method only considers the case where **the complete information of a pure tree** is available. When we have the complete information of a pure tree and obtain perfect representations in hyperbolic space that can recover the original distance structure, what do we "learn" by that? In other words, why are we not satisfied with the original tree? Actually, I do not see any information that we can obtain from the Authors' method, compared to the original tree. Hence, I do not dare to call the Authors' method "machine learning." Recall that (Sarkar, 2011) was presented in the graph drawing context, and (Sala et al., 2018) included hyperbolic MDS, which has room for being called "machine learning."
Obviously, even if some work is not directly related to machine learning, provided that it has the potential to contribute to the machine learning community, ICML must be capable of accepting it as long as the potential is clarified in the paper. However, the Authors' current manuscript does not clarify how it can contribute to the community.

In fact, the Authors' work has the potential to contribute to the machine learning community. The perfect representations can provide the distance between two nodes in almost constant time (though it depends on the precision of float and dimension, strictly speaking) while the space complexity is almost linear in the number of nodes (again, it depends on the precision of float and dimension). This is in contrast to the naive distance matrix implementation needing computational costs quadratic in the number of nodes. This must be an attractive *data structure*, if not machine learning itself, for machine learning on tree data. Nevertheless, the current manuscript does not stress such perspectives. It requires rewriting from scratch, which is not what the rebuttal period aims at.

Other Comments Or Suggestions: Considering the solid contribution of the Authors' work to computer science, I humbly suggest the following:
- The most straightforward way is to submit the manuscript to another venue (it requires withdrawal from ICML), which can appreciate the Authors' contribution better. In any case, I encourage the Authors to upload the work to some online repositories to secure the priority right if they have not done so.
- If the Authors want to present their work at machine learning venues, rewrite it from scratch so that the contribution to the community is clear. In any case, the competitors would be data structures, like the distance matrix, naive implementation of a tree as a graph, etc., rather than existing hyperbolic representations.

Questions For Authors: Could you formalize the statements of Theorems 4.2 and 4.4?
Code Of Conduct: Affirmed.

Overall Recommendation: 3
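As a side note on the minimal-angle-maximization idea discussed in this review: the underlying subproblem (spreading points on a sphere so the smallest pairwise angle is large, i.e., a Tammes-style problem) can be approximated with a simple soft-min repulsion. This is a generic Euclidean-sphere sketch of the technique; all names and settings are my own illustrative choices, not the Authors' hyperbolic algorithm.

```python
import numpy as np

def spread_on_sphere(n, d, steps=2000, lr=0.05, tau=30.0, seed=0):
    """Approximately maximize the minimal pairwise angle of n unit
    vectors in R^d via a soft-max relaxation (illustrative only)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    for _ in range(steps):
        g = x @ x.T                      # pairwise cosines
        np.fill_diagonal(g, -np.inf)     # ignore self-similarity
        w = np.exp(tau * (g - g.max()))  # weight the closest pairs most
        w /= w.sum()
        x -= lr * (w + w.T) @ x          # push heavily weighted pairs apart
        x /= np.linalg.norm(x, axis=1, keepdims=True)
    return x
```

For $n=4$, $d=3$ the optimum is the regular tetrahedron, where every pairwise cosine equals $-1/3$; the rapid deterioration of achievable minimal angles as $n$ grows is one intuition for why placing children uniformly is hard.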
Rebuttal 1: Rebuttal: We thank the reviewer for their kind remarks regarding the contribution of our paper and their recognition of the convincing nature of our method. Below we address the main concern of the reviewer, namely the potential and relevance to the machine learning community.

**Relevance to the machine learning community.** In our view, there are three important reasons why this paper should be published in a machine learning conference such as ICML.

1. This paper follows a long tradition of hyperbolic embedding papers published in machine learning conferences. As mentioned by the reviewer, (Sala et al., 2018) is closest to our work and was previously published in ICML. Their h-MDS performs an eigenvalue decomposition on the pairwise distance matrix and projects the resulting eigenvectors to the hyperboloid to obtain embeddings, hence also not making use of any learning algorithms. The high impact of (Sala et al., 2018) serves as an indication of the potential of hyperbolic tree embeddings within our field. Other hyperbolic embedding papers in machine learning conferences include (Nickel and Kiela, 2017; Ganea et al., 2018; Yu et al., 2022b), each of which uses a learning approach to obtain embeddings. Although these methods all employ some type of learning, they achieve considerably worse results than the constructive approaches. Therefore, we think it is important that our constructive approach is published within the machine learning community.

2. We believe that the primary relevance of hyperbolic embeddings to the machine learning community lies in their potential for downstream deep learning applications. In recent years, a common approach for incorporating hierarchical knowledge into deep learning models is by placing the hyperbolic embeddings of the hierarchies on top of neural networks. Then, the node embeddings serve as prototypes for learning. See for example (Liu et al., 2020; Long et al., 2020; Yu et al., 2022b).
In other words, hyperbolic embeddings are already an active part of the deep learning pipeline. Lower distortion means better embeddings, which leads to better results in hierarchical deep learning. Hence it is important for us to share our findings in a machine learning venue to maximize impact.

3. We believe FPEs to be an important direction for the field of hyperbolic machine learning. Nearly every paper on hyperbolic learning mentions the numerical problems that arise from attempting to do computation in hyperbolic space, with a particularly compelling analysis of the numerical stability by [1], also published at ICML. They show that the maximally representable subset of hyperbolic space in the commonly used models is severely limited in its radius. FPEs alleviate these problems, significantly increasing this radius. We think our work will be impactful to hyperbolic machine learning by shining a light on the potential of FPEs while also providing an implementation of the current state-of-the-art routines on this type of arithmetic that is compatible with the most commonly used deep learning framework, i.e., PyTorch. This provides a starting point for exciting future work on hyperbolic machine learning with higher precision, while maintaining GPU-compatibility.

Based on these, we believe our work will be most impactful if it is published at ICML where it can reach its intended target audience: the hyperbolic machine learning community. We hope that the reviewer will appreciate and agree with our motivations. We will make sure to better clarify these important points in the paper.

[1] Mishne, Gal, et al. "The numerical stability of hyperbolic representation learning." ICML. 2023.

**Corrections to Theorems 4.2 and 4.4.** The reviewer is correct in that we intend $\epsilon^*$ to be in the order of the machine epsilon. We apologize for the lack of clarity and have changed the text accordingly.

---

Rebuttal Comment 1.1: Comment: Thank you for your comments.
Unfortunately, I do not think your reply and references directly answer my concern because they are not directly related to what you are doing: converting a pure tree to points in hyperbolic space where no cycle is allowed. Simply, how can converting a pure tree (without any cycle allowed) to points in hyperbolic space contribute to the ICML community? Please distinguish a tree-**like** structure and a **pure** tree since your method only applies to a pure tree.

---

Reply to Comment 1.1.1: Comment: Hyperbolic embeddings of pure trees have been actively used in deep learning, making them of high relevance for the ICML community. A common approach is to use the pure tree as prior knowledge for a downstream task and use the hyperbolic embedding of the tree as prototypes on top of any neural network. For example, (Yu et al., 2022b) use the hyperbolic embedding of a hierarchical classification system of skin lesions, given as a pure tree, to improve skin lesion recognition. (Long et al., 2020) and (Ghadimi Atigh et al., 2021) both use the embedding of a hierarchy of actions, also given as a pure tree, in hyperbolic space to improve action recognition. Other examples include [1], which uses the embedding of a taxonomy of butterflies in the form of a pure tree on top of a neural network, or [2], which uses hyperbolic tree embeddings to improve explainability in action recognition.

To clarify, our method does apply to tree-like structures as well, as shown in the experiments in Appendix M, where we have applied our method to the Diseases and CS PhDs graphs. To use our method on such structures, we combine HS-DTE with some method for embedding a tree-like graph into a tree such as (Abraham et al., 2007). This is the same approach that (Sala et al., 2018) take to apply their construction to tree-like structures. Therefore, methods that use the hyperbolic embedding of tree-like structures can also benefit from our method.
There are several examples of such papers, such as [3], which uses a continuous graph embedding to improve aggregation across nodes of the embedded graph in GCNs. (Liu et al., 2020) uses a hyperbolic embedding of the WordNet noun hierarchy to improve zero-shot recognition. Another example can be found in [4], which uses a hyperbolic graph embedding to improve zero-shot learning.

We would also like to point out that our theory and implementations for FPEs form a basis for building GPU-compatible neural networks with higher precision in general. The potential of such networks is significant, particularly in hyperbolic machine learning, where nearly every method suffers from the numerical problems that originate from a lack of precision and which are extensively analyzed in [5]. Therefore, we believe our paper to be relevant to the machine learning community not only due to the improved tree and tree-like graph embeddings, but also because it paves the way for higher-precision neural networks.

Following the guidance of the reviewer, we will update the paper to highlight the importance of tree embeddings in hyperbolic space for machine learning.

[1] Dhall, Ankit, et al. "Hierarchical image classification using entailment cone embeddings." CVPR workshops, 2020.
[2] Gulshad, Sadaf, Teng Long, and Nanne van Noord. "Hierarchical explanations for video action recognition." CVPR, 2023.
[3] Pei, Hongbin, et al. "Geom-gcn: Geometric graph convolutional networks." ICLR, 2020.
[4] Xu, Yan, et al. "Meta hyperbolic networks for zero-shot learning." Neurocomputing 491 (2022): 57-66.
[5] Mishne, Gal, et al. "The numerical stability of hyperbolic representation learning." ICML, 2023.
Summary: Existing combinatorial approaches to embedding trees in hyperbolic space suffer from issues stemming from (1) the difficulty of spreading out points on a hypersphere, and (2) floating-point precision issues. To address issue (1), the authors propose highly-separated Delaunay tree embeddings (HS-DTE), which maximize the minimal angle between two leaves, unlike previous approaches which optimize the mean pairwise angles. To address (2), the authors present HypFPE, a modernized approach to floating-point expansion with special attention paid to the hyperbolic distance function. The authors demonstrate the superiority of their methods against comparable approaches for a variety of real and empirical trees.

## Update after rebuttal

I maintain my recommendation to accept this paper. A common theme among other reviews is that, while the paper itself is of high quality, its relevance to the ICML readership is limited. I would like to offer a dissenting opinion: as the authors themselves argue better than I could in their rebuttal to Reviewer uxsR, there is substantial precedent for machine learning venues publishing work pertaining to tree embeddings. More speculatively, I also believe it is valuable to expose ML venue audiences to more nuts-and-bolts hyperbolic work, given the extensive history of cross-pollination between these fields.

Claims And Evidence: The authors do an excellent job situating their work within the rich literature on embedding trees in hyperbolic space, motivating their approach in terms of empirical issues and theoretical problems (the Tammes problem), proving bounds on all of their formulas, and devising careful experiments that demonstrate their improvement over other methods. All in all, each part of their paper is clearly isolated, well-argued, and relevant to the central argument.

Methods And Evaluation Criteria: The authors devise fair and minimal evaluations for demonstrating the value of their method.
Their extension to phylogenetic trees is also reasonable. Given the emphasis on GPU optimization and several allusions to runtime improvements throughout the paper, I am confused by the absence of a runtime analysis in the paper.

Theoretical Claims: I skimmed the proofs in Appendices D–H, but I did not scrutinize them closely.

Experimental Designs Or Analyses: I closely read the description of the experiments and Appendix B and verified they are reasonable.

Supplementary Material: I read appendices A and B, and skimmed appendices D–H. I did not look closely at the additional tree embeddings or FPE arithmetic algorithms.

Relation To Broader Scientific Literature: The authors situate themselves in the context of tree embedding literature, which they subdivide into optimization-based and constructive methods; this method belongs to the latter, but quite reasonably the authors benchmark against all methods of both categories. They discuss the relationship of HS-DTE to Delaunay tree embeddings, as well as explain how their work on HypFPE differs from other work on floating-point expansion. All in all, the authors are quite clear about where their work fits into the existing literature.

Essential References Not Discussed: The authors perform extensive benchmarking on phylogenetic trees, but neglect to cite any of the literature concerning embedding phylogenetic trees in hyperbolic space [1, 2, 3, 4]. These tend to be much more Bayesian in flavor, as is typical of the phylogenetics literature more broadly; I acknowledge that this would be difficult to do explicit comparisons with, but I think acknowledging the existence of this literature is necessary.

**References**
[1] Macaulay et al (2023). Fidelity of hyperbolic space for Bayesian phylogenetic inference. https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1011084.
[2] Jian et al (2022). Learning Hyperbolic Embedding for Phylogenetic Tree Placement and Updates.
https://pmc.ncbi.nlm.nih.gov/articles/PMC9495508/.
[3] Matsumoto et al (2021). Novel metric for hyperbolic phylogenetic tree embeddings. https://pmc.ncbi.nlm.nih.gov/articles/PMC8058397/.
[4] Wilson (2021). Learning phylogenetic trees as hyperbolic point configurations. https://arxiv.org/pdf/2104.11430.
[5] Chen et al (2025). Variational Combinatorial Sequential Monte Carlo for Bayesian Phylogenetics in Hyperbolic Space. https://arxiv.org/abs/2501.17965.

Other Strengths And Weaknesses:
**Strengths:**
* The paper is very coherently written, and the experiments are minimal and convincing.
* The authors promise to provide PyTorch-compatible libraries for both HS-DTE and HypFPE, which will be a boon to the hyperbolic deep learning community.

**Weaknesses:**
* The novelty of this paper is somewhat limited: as I understand it, HS-DTE is a simple substitution of one loss function for another relative to the previous Delaunay embeddings work; similarly, floating-point expansion has been done in hyperbolic space previously, and the authors simply modernize it and give a more careful gloss of its role in the hyperbolic distance function. That said, this is a well-studied problem, so I expect most papers in this field to be somewhat incremental. Given the clear improvements the authors demonstrate, I am less concerned about this weakness.
* The phylogenetics benchmarks are not situated in the context of other hyperbolic methods for embedding/inferring phylogenetic trees.
* The authors talk about runtime, but do not provide actual benchmarks for this. Memory usage would also be interesting to look at, especially as a function of the number of FPE terms.

Other Comments Or Suggestions:
* The discussion of hyperbolic reflections is somewhat confusing, and I found the transition into reflections in subsection 2.1 quite abrupt. For context, I am quite familiar with the preceding math, and not at all familiar with reflections.
To this end, I have three concrete suggestions: * Add a paragraph break prior to starting this discussion * Explaining the motivation for the reflections more clearly would motivate this math better. * A figure demonstrating a hyperbolic reflection in $\mathbb{P}^2$ could be helpful * Per the ICML style guide, table captions should appear above the tables. * It is unclear what the horizontal lines in Tables 2 and 3 mean * The running title of the paper still reads "Submission and Formatting Instructions for ICML 2025" Questions For Authors: * Can you explain the "elbow" in Figure 1(b)? Is this surprising, or is it explained by some theoretical aspect of FPE? * I found it very surprising that it is a difficult problem to distribute $n$ points nicely on $\mathbb{S}^d$. Can you provide an intuitive explanation for why this is the case? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback regarding the writing, motivation and results. Below we address the points and questions raised by the reviewer. **Runtime comparison and memory usage.** Given the similar theoretical complexities, we consider benchmarking this to be an interesting and important direction for future work. However, currently, such a runtime comparison cannot be performed fairly, since (Sala et al., 2018) perform arbitrary precision (AP) arithmetic in Julia, which uses the heavily optimized GNU MP and GNU MPFR libraries, written in C, which are the result of well over two decades of development. Our FPE library is a first-version Python library that still requires extensive optimization. So, before we can make a fair comparison, an optimized CUDA version of our library should first be developed. We expect that, once such a library exists, the runtime of AP arithmetic versus FPE arithmetic will be similar for small tensors, while for large tensors the FPE arithmetic will likely be significantly faster due to its ability to leverage the benefits of accelerated hardware. Regarding the memory usage, the main difference between FPE arithmetic and AP arithmetic is that each term of an FPE contains its own exponent (11 bits per float64), whereas an AP floating point number has a single exponent (64 bits in Julia). Therefore, adding bits of precision is slightly less efficient for FPEs than for AP arithmetic. However, this difference is marginal. **Reference to literature on the embedding of phylogenetic trees.** We thank the reviewer for pointing out the papers on the embedding of phylogenetic trees. We have added references to these papers in the manuscript. **Clarification on hyperbolic reflections.** We thank the reviewer for their suggestions for further clarification on hyperbolic reflections.
Each of these will be incorporated into the manuscript, including a figure to show some simple examples of such reflections in $\mathbb{D}^2$. The motivation for reflections lies within the fact that these are isometries of hyperbolic space, which means that we can safely use these to move our embeddings around without changing the distortion. This fact is not trivial and is a result that is sometimes proved in books on hyperbolic geometry such as in (Anderson, 2005). A reference to this result should have been in the manuscript and we have since added this to Appendix A alongside a clear explanation of the motivation and interpretation. **Explanation behind elbow behaviour in Figure 1(b).** This is interesting behaviour that results from increasing the scaling factor $\tau$ within the construction itself. Note that, as we increase the precision, we increase the scaling factor $\tau$, since a greater $\tau$ is always beneficial as long as it does not lead to numerical problems. For smaller values of $\tau$, the method cannot make good use of the curvature and, as a result, subtrees will tend to overlap as we iteratively embed a tree, leading to massive distortion. As we increase the scaling factor $\tau$, there comes a point where the subtrees stop overlapping, causing the worst-case distortion to quickly drop to near 1. Past this threshold, increasing $\tau$ will only marginally decrease distortion compared to the initial drop, as the distortion will already be very low. We will include this explanation in the paper. **Intuitive explanation behind the difficulty of the Tammes problem.** This is a very surprising problem as the problem statement itself is deceptively simple. Giving an intuitive explanation of why exactly this problem is so difficult is itself not particularly easy. Probably, the most straightforward way of getting a feeling for the difficulty of this problem is by manually attempting to place points on the sphere $S^2$.
Placing a small number of points remains fairly simple (for example through the use of regular polyhedra), but as the number of points increases, this quickly becomes very difficult. In fact, for 15 points, there is already no known optimal solution [1]. This problem has sparked enormous amounts of research and has been found to have applications in many different areas of physics, computer science, and other scientific fields. For further information we refer the reviewer to a book on spherical coding such as [2]. [1] D. A. Kottwitz, The densest packing of equal circles on a sphere, Acta Cryst. Sect. A 47 (1991), 158–165. [2] Conway, John Horton, and Neil James Alexander Sloane. Sphere packings, lattices and groups. Vol. 290. Springer Science and Business Media, 2013. **Other comments and suggestions.** We have incorporated these suggestions into the manuscript. Thank you. We hope to have adequately addressed the reviewer's concerns and to have answered their questions. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful answers to my review. I have no further questions, and am keeping my score at a 4.
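To make the difficulty concrete, the point-spreading objective discussed above can be sketched in a few lines. The following toy script is illustrative only — it uses a simple inverse-distance repulsion energy rather than the angle-based energy of the paper — and spreads $n$ points on $S^2$ by projected gradient descent, then reports the Tammes objective (the smallest pairwise angle):

```python
import math
import random

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def spread_points_on_sphere(n, steps=2000, lr=0.02, seed=0):
    """Toy point-spreading on S^2: projected gradient descent on a
    pairwise inverse-distance repulsion energy (an illustrative
    stand-in, not the paper's separation energy)."""
    rng = random.Random(seed)
    pts = [normalize([rng.gauss(0, 1) for _ in range(3)]) for _ in range(n)]
    for _ in range(steps):
        new_pts = []
        for i, p in enumerate(pts):
            force = [0.0, 0.0, 0.0]
            for j, q in enumerate(pts):
                if i == j:
                    continue
                d = [p[k] - q[k] for k in range(3)]
                dist = math.sqrt(sum(c * c for c in d)) + 1e-12
                for k in range(3):
                    force[k] += d[k] / dist ** 3  # push neighbours away
            # take a repulsion step, then project back onto the sphere
            new_pts.append(normalize([p[k] + lr * force[k] for k in range(3)]))
        pts = new_pts
    return pts

def min_angular_separation(pts):
    """Smallest pairwise angle in degrees: the Tammes objective."""
    worst_cos = -1.0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            cos = sum(a * b for a, b in zip(pts[i], pts[j]))
            worst_cos = max(worst_cos, max(-1.0, min(1.0, cos)))
    return math.degrees(math.acos(worst_cos))
```

For $n = 6$ the iterates tend toward the octahedron (90° separation), but certifying optimality for general $n$ is exactly the hard part: past small $n$, provably optimal configurations are unknown, which is the difficulty the rebuttal alludes to.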
Summary: This paper takes on the task of transforming into an algorithm some mathematical results about low-distortion embeddability of any metric tree into hyperbolic space, whose existence and non-quantitative construction was known by work of Sarkar. This endeavor leads to two new challenges. (1) the tree embeddings can become very large (unbounded diameter). Since the Poincare disk model sends the infinite hyperbolic space to a bounded chart, this means that unbounded numerical precision has to be used. Thus the authors present encodings via floating-point formal sums, that fit well with GPU processing and allow arbitrary precision (2) at vertices at which the metric tree has high degree N, for good efficiency (i.e. to get low distortion) we would impose that the N branches have to go in as well separated as possible directions, as measured in the tangent space to hyperbolic space at that point. Thus, since the tangent space is Euclidean, we get the problem of distributing N points on a sphere, as uniformly as possible. This is in itself a well studied problem with many possible versions, but (a) one version has to be chosen and (b) it needs to be fit within the framework of hyperbolic space embeddings of trees. Here for (a) the authors choose to implement an energy depending on angle separations, and they proceed to do (b) in the natural way one would expect. Claims And Evidence: I think that the claims about the algorithm working better for tree embeddings is warranted, both by the theoretical and by the quantitative comparison with other embeddings, so I don't have much to say on that. However I don't follow the reasoning about this embedding being useful for deep learning frameworks, as hinted at by the passages in the introduction in the first page of the paper. (In fact the authors mention this limitation explicitly in lines 247-248, first column, but they don't propose any solution.) So I don't see how the floating point expansion from Def.
4.1 can easily be fit into neural network processing. If the whole goal is just to embed a known tree into hyperbolic space, OK. But in practice for HNN applications one does not know the correct tree, and that has to be learned by a neural network or some other mechanism. The compatibility of the floating point expansion with Deep Learning frameworks has not been discussed nor tested. So the claims about usefulness in machine learning remain dubious to me. Methods And Evaluation Criteria: I think that yes, they do. Theoretical Claims: Yes, I checked all the details and didn't find anything wrong. Experimental Designs Or Analyses: I think that the analyses match closely the theoretical part, and are not the most important part of the paper. However, they are sound for me. Supplementary Material: Yes, I reviewed it all. Relation To Broader Scientific Literature: As said before, I think that this paper relates as a practical counterpart to the mathematical theory of embedding trees into hyperbolic space, however it does not connect with the most influential part of Hyperbolic Machine Learning, due to the presence of floating point expansions and due to not proposing a way to include this into hyperbolic machine learning pipelines. Essential References Not Discussed: I think that some reference to the book of Borodachov-Hardin-Saff would be useful. Also work by Carlos Beltran and collaborators on energy minimization over spheres can be interesting to mention. https://arxiv.org/abs/1703.00416. Perhaps taking points randomly from a determinantal point process can be another way to build good enough embeddings for other choices of energies? Other Strengths And Weaknesses: I covered all the important points in other sections. Other Comments Or Suggestions: Line 133-134 second column: "strong embeddings" means what precisely? do you mean less distorted? And "numerical issues" means precisely what? It'd be good to be specific about what you refer to.
Line 136 second column: I think that "deg_{max}" would be better as "maxdeg", if the underlying language of reference is English. If you follow French or Italian then go with "deg_max" but that seems odd. Line 145 second column: "should be \tau= \Omega(1)" what do you mean by "should" exactly? should be that in order to do what? Line 154 first column: "However" means what, i.e. what's the problem with the methodology of that library? Be precise please. In step 5 of Algorithm 1, what is R_{\phi(v)\to 0}? this notation is not introduced before. Line 189-190 second column: "We find that this method results in strong separation when compared to highly specialized existing methods [..]" can you explain how you measured that? What's the comparison and to what do you compare it? In the statement of Thm. 3.1 you talk about "number of optimizations" what does that mean? how is that defined from eq. (9) exactly? At the end of page 4, you say "MAM is easily optimizable", what does that mean? Does it have no local minima? Or what does it mean "easily" and what would "hardly" mean? Line 223 first column: "can be further optimized", so why was it not optimized? maybe write "could be optimized"? About the description of formula (13), in the preceding paragraph: "unevaluated sums" does that mean "collections of terms" or what does it mean exactly? maybe represent that as tuples of floating point numbers, and then use formula (13) just for the interpretation. This is because "unevaluated sums" seems odd and to me unnecessarily hard to imagine. Questions For Authors: 1) Do you know of actual separation guarantees that allow to compare the theoretical results of separation obtained from your algorithm for minimizing (13) vs the previous works? In other words, in theory, what's the improvement, how can that be measured? 2) I reiterate on the question of NN processing compatibility of your floating point expansions. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful and constructive feedback. Below, we describe how the feedback has been incorporated into the paper and address the reviewer's concerns. **The importance of embedding given hierarchies in deep learning.** Many works have shown the strong potential of deep learning with hyperbolic embeddings. All these works assume that hierarchical information is known *a priori*. Such knowledge is important for deep learning and can be incorporated into the model (as prototypes for example), into the loss function, or through several other means. The most intuitive examples of such hierarchies can be found in biology, where tasks such as classification can greatly benefit from knowledge regarding the evolution of species, typically given through animal taxonomies or phylogenetic trees [1, 2]. Other examples can be found in many different areas, such as biochemistry [3], medicine (Yu et al., 2022b), action recognition (Long et al., 2020), commonly used classification datasets such as ImageNet [4] and many more. In general, these works show that hierarchical information benefits deep learning, especially for hierarchical classification. There are various ways in which such a hierarchy can be leveraged to improve performance, such as through prototype learning. Our embedding approach can be directly used in deep learning in the same way. For all deep learning approaches, the better the embedding, the better the corresponding prototype distribution, benefitting deep learning. This is shown in (Liu et al., 2020; Long et al., 2020; Yu et al., 2022b). We apologize for not making this case clear enough and we will update the discussion in the introduction accordingly. [1] Stevens, Samuel, et al. "Bioclip: A vision foundation model for the tree of life." CVPR. 2024. [2] Chen, Jingzhou, et al. "Label relation graphs enhanced hierarchical residual network for hierarchical multi-granularity classification." CVPR. 2022.
[3] Yazdani-Jahromi, Mehdi, et al. "HELM: Hierarchical Encoding for mRNA Language Modeling." ICLR. 2025. [4] Chatterjee, Ankita, Jayanta Mukherjee, and Partha Pratim Das. "ImageNet classification using wordnet hierarchy." IEEE TAI 5.4 (2023): 1718-1727. **Fitting floating point expansions into neural networks.** Integrating floating point expansions into existing deep learning libraries is an exciting next step, although it is indeed not completely trivial. Our statement in lines 247-248 is about the incompatibility of arbitrary precision arithmetic, since this incompatibility is at the hardware level. On the other hand, FPE arithmetic does allow for the use of GPUs, but requires new routines for additional functions and, often, for their derivatives in order to work with the usual deep learning pipelines. More specifically, to incorporate FPE arithmetic into existing hyperbolic learning methodology, we would have to implement the forward and backward passes of each operation involving the hyperbolic space (where the added precision is warranted) in FPE arithmetic, similar to what we did with the computation of the distance function in this work. On top of the additional required theory, the new architectures that incorporate these embeddings would have to be designed, optimized and tested. Because of this, we consider this to be outside the scope of the current paper, but a very interesting and necessary direction that we intend to explore in future work. **Additional references.** We thank the reviewer for the additional references and have added these to the paper. **Theoretical comparison between our method and existing methods in terms of separation.** There are a few specific settings in which optimal solutions are known. In some of those cases we have compared the optimal result to ours and found very similar separation. However, ours has the benefit of being always applicable, whereas the table of best known solutions (Cohn, 2024) has many missing entries. 
Combining both could lead to a small improvement, but due to the marginal increase in separation between our results and the best known results, the decrease in distortion will also be marginal. **Other comments and suggestions.** We have incorporated each of the reviewer's comments and suggestions into the text. Due to the character limit we cannot address each point individually here. If the reviewer would like further clarification on any point in particular, then we would be happy to address these in the next response. We hope to have addressed the reviewer's concerns and that they are convinced of the potential of our method for the hyperbolic learning community. --- Rebuttal Comment 1.1: Comment: I have read the other reviews and their rebuttals, and the rebuttal to my review. - I liked the review of gaL1 a lot; it included nice references on phylogenetic trees, some of which I didn't know and want to read soon. I agree that phylogenetic tree analysis is a good direction of application for the current work, although I don't think that this would necessarily be done by ML methods. I didn't find their questioning on novelty addressed in the rebuttal to their review, but agree with them that it is minor. - About the review of uxsR, I share the admiration for the implementation side and I consider it too a strong paper in general, and I have exactly the same concern about its unsuitability for the ICML community, since the paper does not work in a ML framework and requires exciting new work if we want to plug it into a ML pipeline. I feel that similarly to my review's rebuttal, their rebuttal does not address the concern accurately. This is in consonance with my points A. and B. below, which highlight points not addressed in the rebuttal. My view is that the paper in which point B. will be covered will have strong impact in conferences such as ICML, but at the moment we are not yet there. ----------------About my own review's rebuttal: ------------ A.
About "All these works assume that hierarchical information is known a priori." Sorry, I strongly disagree. The whole point of most classical HNNs is to find out a good candidate for the hierarchy, and often the goal is just to process data with a hidden hierarchy in a way that respects that structure, without necessarily embedding it explicitly. Most works that use the hyperbolic metric that I know, use the metric precisely because the user knows that there is some hierarchical structure, but this structure is not known a priori. One of the main benefits of hyperbolic space is that it gives a continuous search space as a substitute for the combinatorial space of all metric trees. If we were to search for the metric tree that best fits a given distance matrix, the search space faces a combinatorial explosion, as the number of non-isomorphic combinatorial trees with N leaves grows exponentially in N. Instead, hyperbolic embeddings plug the N leaves in hyperbolic space with low deformation and optimize this data in the natural geometry, sometimes selecting a good candidate for an almost-optimal embedding, but without fixing the hierarchy a priori. Your work fixes the hierarchy a priori and then embeds it into hyperbolic space. This does not allow one to circumvent the combinatorial explosion of the space of trees mentioned above. This is why it is unfair to compare your work to the ones in which the hierarchy is not assumed known a priori. A few examples about the hierarchy not being known a priori: 1) the "Hyperbolic neural network" paper by Ganea et al. uses as experiments some random perturbation of strings. Then they validate the hyperbolic embedding's tendency of selecting good hierarchical relations. 2) "Poincare embeddings for learning hierarchical representations" -- the title says it already: hierarchical structures are learned, not known/fixed and then embedded.
3) "Hyperbolic entailment cones for learning hierarchical embeddings" -- the title says it, as before. 4) "From Trees to Continuous Embeddings and Back: Hyperbolic Hierarchical Clustering" -- the goal is to use hyperbolic space as a continuous search space for tree structures. This means that the task does not assume a known tree structure. 5) "Hyperbolic Busemann Learning with Ideal Prototypes" -- again, the setting is as in 1) or 2), with the same comment. Note that all these except for 1) are taken from the beginning of your reference list; they are classical papers on hyperbolic neural networks. B. I agree that adapting floating point expansion to deep learning in actual computations is exciting and outside the scope, thanks for the discussion. C. When I asked for theoretical comparison on separation I meant provable results that describe the comparison at a theoretical level. Empirical validations are OK, but there are many ingredients in these that can vary and modify the outcome, therefore I don't find them equally compelling. Can you please either provide information about the theoretical comparison, or affirm that this is outside the scope of the paper? I thank the authors for their responses and for the moment I'll keep my score as is, since I don't see my above concerns A. and B. addressed, similarly to those of reviewer uxsR mentioned below. --- Reply to Comment 1.1.1: Comment: We are glad to hear that the reviewer agrees on point B regarding the application of FPEs to deep learning. Aside from their application to deep learning, FPEs can be used to improve hyperbolic representations in all current applications, so we do consider this a relevant contribution nonetheless. For point C, we would also be very interested in such an analysis. Unfortunately, the analyses go beyond the current understanding of the Tammes problem in mathematics, hence the theoretical comparisons are currently not viable and we have to fall back on empirical comparisons.
**This leaves point A as the remaining point. We agree that not all hyperbolic learning papers assume the a priori availability of hierarchical knowledge. However, it is a common starting point in a lot of machine learning papers, including multiple of the papers mentioned by the reviewer. Out of the 5 papers listed by the reviewer, 3 actually do assume hierarchies as prior knowledge. This is a crucial point.** Our goal is to embed a (symbolic) tree-like structure into a continuous hyperbolic embedding space. The Poincaré Embeddings (NeurIPS) and Hyperbolic Entailment Cones (ICML) papers have the exact same goals as our method and are also baselines in our paper: 1. "Poincaré embeddings for learning hierarchical representations": In this paper, some set of nouns is embedded in hyperbolic space. The loss that is used (equation 6 of the paper) uses the set $\mathcal{D}$, which is the set of hypernymy relations between these noun pairs. In other words, the set of nouns are the vertices and the set $\mathcal{D}$ the edges between these vertices. Thus, the loss explicitly assumes a given graph that is directly used to embed the nodes of the graph into hyperbolic space. So, yes, this paper does assume a given hierarchy. 2. "Hyperbolic entailment cones for learning hierarchical embeddings": Same as for Poincaré embeddings, the loss function (equation 32 in the paper) explicitly uses a hierarchy that is known *a priori* in the form of hypernym links. This is, again, simply a set of edges on some set of nouns, which together form a graph. This graph is then sampled for nouns that are directly related and ones that are not (sets $P$ and $N$ in the equation). Other papers with this exact same setup for embedding a given tree-like structure include (Sala et al., ICML 2018) and (Yu et al., MICCAI 2022). Second, there is a wealth of papers that tackle a downstream deep learning task with a hyperbolic embedding space on top of a neural network.
Many of these papers assume that for their classification task, a hierarchy is available a priori that spans all classes. The hierarchy is then incorporated into the model through various methods such as embedding the hierarchy into hyperbolic space and using these as prototypes. Examples include, but are not limited to, (Liu et al., 2020; Long et al., 2020; Ghadimi Atigh et al., 2021; Ghadimi Atigh et al., 2022; Yu et al., 2022b). It even includes the Hyperbolic Busemann Learning paper mentioned by the reviewer, which includes settings where hierarchical prior information can be injected, see for example Tables 5 and 6 of that paper. The reviewer is right that not all papers assume hierarchies, e.g., Hyperbolic Neural Networks focus on re-interpreting core neural network layers in hyperbolic space and are not focused on hierarchical embeddings. Papers such as [1] try to infer hierarchies from hyperbolic embeddings. **In conclusion, our setup is a standard setup in hyperbolic machine learning literature.** We show that in this setup, we provide the best solution and the best path forward in hyperbolic learning with known hierarchies. It would be heartbreaking for us if the paper is rejected on this premise, since it is simply not true that hyperbolic learning papers never assume hierarchies. We hope to have convinced the reviewer of the validity of our setup and the value of our contribution. [1] Nickel, Maximillian, and Douwe Kiela. "Learning continuous hierarchies in the Lorentz model of hyperbolic geometry." ICML. PMLR, 2018.
Summary: This paper proposes a construction-based tree embedding method in hyperbolic space. Claims And Evidence: "While these approaches are flexible due to minimal assumptions, the optimization can be unstable, slow and result in heavily distorted embeddings" This is not true; many hyperbolic embeddings achieve great results, e.g., in NLP, knowledge graphs, GNNs. It would be great to show more evidence for the claim "...result in heavily distorted embeddings". Methods And Evaluation Criteria: This paper focuses on tree embeddings, and only evaluates the model on small and synthetic tree data. To demonstrate its applications to real-world cases, it should also be evaluated on real tree-like (but not exactly tree) datasets, such as on tasks of hierarchical graph (WordNet) embeddings, knowledge graphs, etc. Another reason to consider real data is to compare with learning-based hyperbolic embeddings used in KG embeddings, GNNs, etc. Theoretical Claims: I read the theorem, but did not check the proof. Experimental Designs Or Analyses: The experimental designs on synthetic datasets make sense, but the method should also be evaluated on real tree-like datasets. Otherwise, it is not convincing to claim that construction-based methods are better than learning-based methods. Supplementary Material: I did not check the appendix. Relation To Broader Scientific Literature: This work might be related to many applications involving hierarchical data. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: 1) I like the paper; the writing of the paper is great. Motivation is also clear. 2) The theorems provide some nice guarantees that are useful for real applications (and the other methods do not have). 3) Illustration of the results and findings (figures and tables) are also clear. Weaknesses: 1) Comparison w.r.t. efficiency is missing 2) Lack of evaluation on real-world datasets and ML tasks 3) Lack of conceptual description of the proposed idea.
I suggest adding a Figure 1 in the intro to summarize the main idea of the paper. Other Comments Or Suggestions: See Weaknesses Questions For Authors: Does the proposed method only work for trees? Or can it be extended to DAGs, or any tree-like data such as hierarchies with a few cycles? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback on the writing and clarity, and we are glad to hear that they like the paper. The reviewer points out four points of discussion and suggestions: the performance of optimization-based methods, experimental results on real-world data, comparison w.r.t. efficiency and a figure 1 for summarizing the approach. We address each of these below. **Performance of optimization-based methods.** The approaches we mention regarding distorted embeddings are the optimization-based approaches Poincaré Embeddings and Hyperbolic Entailment Cones specifically. We will clarify our statement to avoid any confusion with hyperbolic embeddings in general. There are several ways of measuring the quality of embeddings. For some of these, such as the MAP and F1 score, the optimization-based methods indeed perform well. This is in line with their intended downstream tasks, which often involve link prediction. However, on distance-based metrics, these methods tend to perform significantly worse, as shown for instance by (Sala et al., 2018) and by our results. On the other hand, the construction-based approaches perform well on both types of metrics, while being significantly faster and without requiring hyperparameter tuning. We will add the point on quality with respect to metrics to the discussion. **Experimental results on real-world data and tree-like data.** We fully agree with the reviewer, which is why we have included results on the embedding of real-world phylogenetic trees in Subsection 5.3 and tree-like graphs in Appendix M of the manuscript. Even on such real-world data, we find that our method has the strongest performance. Our method can be used to embed graphs in the same manner employed by (Sala et al., 2018). We hope the reviewer will find these results as convincing as we do. **Efficiency comparison.** We believe comparisons can be made at two levels.
The first is a comparison in efficiency between HS-DTE and the constructive approaches by (Sala et al., 2018). Here, the difference in efficiency comes from the difference in generating points on the hypersphere in step 6 of Algorithm 1. This generation for our method takes mere seconds and, according to Theorem 3.1, only has to be performed $\mathcal{O}(\sqrt{N})$ times in the worst case (much less in practice). So, in our experiments we find a minimal increase (only seconds) in computation time with respect to (Sala et al., 2018). The second is the efficiency of floating point expansion arithmetic compared to arbitrary precision arithmetic. Here, we have referred to the theoretical guarantees described for instance in (Popescu, 2017), which tell us that the complexities of the two types of arithmetic are similar. Testing this in practice is an important future direction. Currently, this cannot be done fairly as we would be comparing our first-version Python implementation of FPE arithmetic against the heavily optimized arbitrary precision arithmetic that is used by Julia. More specifically, Julia uses the heavily optimized GNU MP and GNU MPFR libraries, written in C, which have been in development for well over two decades with the sole purpose of building highly optimized and easily applicable arbitrary precision arithmetic. To compare fairly against such libraries, an optimized CUDA version of our library should be developed. We believe this is an exciting future research direction. We expect that an optimized version for FPE arithmetic will significantly outperform arbitrary precision arithmetic libraries, especially for large tensors, as our approach can rely on accelerated hardware. We believe more attention should be brought to FPEs through papers such as this, as that will hopefully help speed up the development of such libraries. We will include the discussion on efficiency and open research opportunities in the paper.
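The "unevaluated sums" idea behind FPE arithmetic can be illustrated with Knuth's classical TwoSum error-free transformation — a minimal sketch of the general technique, not the authors' library:

```python
def two_sum(a: float, b: float):
    """Knuth's error-free transformation: returns (s, e) with
    s = fl(a + b) and s + e equal to a + b exactly (in exact
    arithmetic). The pair (s, e) is a length-2 floating point
    expansion: an unevaluated sum whose terms each carry their
    own exponent, which is where the extra precision comes from."""
    s = a + b
    b_virtual = s - a           # the part of b that made it into s
    a_virtual = s - b_virtual   # the part of a that made it into s
    e = (a - a_virtual) + (b - b_virtual)  # what rounding discarded
    return s, e

# 1.0 + 1e-20 rounds to 1.0 in a single float64; the expansion
# keeps the low-order part that would otherwise be lost.
s, e = two_sum(1.0, 1e-20)
```

Here `s == 1.0` and `e == 1e-20`, so the tuple `(s, e)` represents the exact sum even though no single float64 can. Longer expansions extend the same trick to more terms, which is what makes large hyperbolic distances representable on standard GPU hardware.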
**Addition of a figure 1 to summarize the method.** We agree with the reviewer and thank them for the suggestion. We have added a Figure 1 to the manuscript. We thank the reviewer for their helpful feedback and hope that we have addressed their concerns.
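For concreteness, the distance-based quality metrics discussed in the rebuttal above reduce to comparing Poincaré-ball distances against the original tree metric. A minimal sketch of such an evaluation, using the standard Poincaré-ball distance formula (the example tree and scaling factor in the usage note are invented for illustration, not taken from the paper):

```python
import math

def poincare_dist(u, v):
    """Geodesic distance in the Poincare ball (curvature -1):
    d(u, v) = arccosh(1 + 2|u - v|^2 / ((1 - |u|^2)(1 - |v|^2)))."""
    diff = sum((a - b) ** 2 for a, b in zip(u, v))
    nu = sum(a * a for a in u)
    nv = sum(b * b for b in v)
    return math.acosh(1 + 2 * diff / ((1 - nu) * (1 - nv)))

def worst_case_distortion(points, graph_dist, scale):
    """Max over node pairs of the ratio between the (rescaled)
    embedded distance and the original tree distance, in both
    directions; 1.0 means a perfectly faithful embedding."""
    worst = 1.0
    for i in points:
        for j in points:
            if i == j:
                continue
            d_emb = poincare_dist(points[i], points[j]) / scale
            ratio = d_emb / graph_dist[(i, j)]
            worst = max(worst, ratio, 1.0 / ratio)
    return worst
```

As a sanity check, placing the root of a tiny star tree at the origin and two unit-distance leaves at radius $\tanh(\tau/2)$ on opposite sides (with scaling factor $\tau$) yields distortion exactly 1, since $d(0, x) = 2\,\mathrm{artanh}(\lVert x\rVert)$ and the geodesic between the leaves passes through the origin.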
Statistical Collusion by Collectives on Learning Platforms
Accept (oral)
Summary: This paper introduces a framework in which a group of users, or “collective,” can statistically coordinate modifications to their data to influence a platform’s learning algorithm. The authors define several types of objectives—“signal planting,” “signal unplanting,” and “signal erasing”—and provide theoretical guarantees on how effectively a collective can achieve these objectives. By modeling the platform as choosing an ε-suboptimal classifier, the work derives lower bounds on the collective’s success under finite-sample conditions. The paper proposes practical strategies for modifying data, shows how a collective can estimate crucial distributional parameters from its own pooled samples, and presents empirical results on a synthetic car-rating dataset. Overall, the paper contributes new algorithmic tools and tighter theoretical bounds that extend and refine earlier studies on collective action in machine learning. Claims And Evidence: Overall, the paper’s theoretical claims—namely that finite-sample data poisoning strategies can guarantee lower bounds for signal planting, unplanting, and erasing—are backed by proofs and validated with synthetic experiments. However, some broader practical claims are less firmly grounded: Real-World Applicability: The paper relies on a synthetic car-rating dataset to illustrate the theory. While it demonstrates the feasibility of the strategies in a controlled setting, it does not provide equally detailed evidence that these methods scale or remain effective in complex, real-world data environments. Platform Defenses: Although the paper discusses how a platform chooses an ε-suboptimal classifier based on the poisoned data, it does not deeply explore realistic detection or mitigation strategies. Consequently, any claim implying that these results readily extend to platforms with active defenses remains under-supported. 
These limitations do not undermine the core theoretical findings, but they do mean that some broader statements regarding practical deployment would benefit from additional evidence or real-world trials to be wholly convincing. Methods And Evaluation Criteria: Yes. The paper’s methods—particularly the definitions of success in “signal planting,” “unplanting,” and “erasing”—align directly with the theoretical goals, and the synthetic car-rating dataset provides a controlled way to demonstrate how these objectives can be measured in practice. While real-world datasets might introduce more complexity, the chosen evaluation still makes sense as a proof-of-concept for the proposed finite-sample bounds and strategies. Theoretical Claims: The arguments appear sound, and I did not spot errors in the manipulations or the final bounds. The stepwise presentation (outlining each concentration term, then taking a union bound) is internally consistent and aligns with standard proof techniques in learning theory. Experimental Designs Or Analyses: I looked at how the synthetic car-rating dataset was generated and how success was measured for the different poisoning strategies (signal planting, unplanting, erasing). The experimental design aligns well with the theoretical framework—each experiment straightforwardly tests whether the derived lower bounds match observed outcomes on the synthetic data. While the dataset is simplified and may not fully capture real-world complexity, I did not see flaws in how the experiments were conducted or analyzed relative to the stated objectives. Supplementary Material: NA Relation To Broader Scientific Literature: Compared to classic data poisoning, which often involves a single adversary, this work extends “collective action” ideas—originally from Hardt et al. (2023)—to finite-sample settings. 
It refines how groups of individuals coordinate data modifications (e.g., “signal planting,” “unplanting,” “erasing”) and integrates statistical estimation techniques so that the collective can learn optimal poisoning strategies from its own local data. This bridges backdoor and adversarial ML with fundamental estimation theory, yielding tighter and more broadly applicable bounds. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their time and appreciate their positive feedback on our paper.
Summary: The paper proposes a framework to quantify the outcome of collective actions in machine learning, performed through the coordinated submission of altered data. In particular, the proposed framework allows the derivation of lower bounds on the success of the collective's strategy. The authors advocate for strategies that will be available in practice for a collective. They also advocate for the autonomy of the collective in estimating the success of a strategy based on information available to the collective. The framework focuses on three objectives, namely *signal planting*, *signal erasing*, and *signal unplanting* (newly introduced by the authors), and proposes efficient strategies to implement them, as well as lower bounds on their success. For the experimental aspect of the work, the authors focused on *signal planting* and *signal unplanting*. Experiments conducted on synthetic data confirm previous observations that the success of the collective depends on its relative size but also highlight the critical role of the collective's absolute size. The results show a small gap between the effective success rate and the theoretical lower bounds proposed. The authors also show that their lower bound compares favorably to Hardt et al. (2023) for signal planting. Claims And Evidence: The two main claims of the paper are: 1. The ability of the proposed framework to assist in estimating the most effective strategy. 2. The ability to provide lower bounds on success, with parameters controlled by the collective. Arguments supporting both claims are well presented in the paper and backed by experimental results, except in the case of *signal erasing*, for which no experimental evaluation is provided. Methods And Evaluation Criteria: The authors use Hoeffding’s inequality to derive the lower bounds and conduct experiments on synthetic data to evaluate their usefulness. They select an appropriate baseline (Hardt et al., 2023) for comparison. 
Overall, the methods and evaluation approach appear reasonable. The authors also discuss the possibility of employing other concentration inequalities to derive tighter bounds. Theoretical Claims: I reviewed Appendix E to better understand how the lower bounds are derived. However, I might not have fully understood certain aspects of the proofs presented in Appendix E. Experimental Designs Or Analyses: I have reviewed all the experiments. Please refer to the section "Other Strengths and Weaknesses" for detailed comments on the experiments. Supplementary Material: Yes, I have reviewed all of the supplementary material. However, I might not have fully understood certain aspects of the proofs presented in Appendices E and F. Additionally, I reviewed the provided code to obtain further details on data generation, the experimental setup, and the implementation of signal erasing. However, this review did not fully address my concerns, prompting the questions listed in the *Questions for Authors* section. Relation To Broader Scientific Literature: This paper contributes broadly to discussions on how affected parties can take a more active role in the implementation of responsible AI. It is closely related to Hardt et al. (2023) and advocates for estimating lower bounds on a strategy's success using information accessible to the collective. Thus, the proposed framework can be viewed as a step toward enhancing the autonomy of collectives. Essential References Not Discussed: I think the authors provide an excellent coverage of the existing works. Other Strengths And Weaknesses: I found the research question of estimating the success rate of collective action based on pooled data very interesting and original. This paper contributes meaningfully to understanding how collective action can be implemented in practice. Overall, the paper is very well written. 
Each objective and assumption is clearly defined, and the corresponding strategies and lower bounds on success are explicitly presented. The authors also provide intuitive explanations of the strategies and appropriately discuss the limitations associated with some key assumptions. My main concerns relate to the experiments and their realism. The data-generation process is not clearly described; for instance, details about the relationship between features and labels are missing. Additionally, it is unclear why only a single configuration for the parameter $\epsilon$ was chosen. Other Comments Or Suggestions: - Line 159. The second probability should be population probability - Line 174. "Fo each feature" -> "For each feature" - Line 403: y∗ ∈ {Good, Average, Poor} -> y∗ ∈ {Good (G), Average (A), Poor (P)} **if possible:** - Propose an alternative notation for $n_{est}$ as it can lead to confusion with $N_{test}$ - Provide concrete examples illustrating each of the three objectives, similar to those given in Hardt et al. (2023) for signal planting and signal erasing (e.g., content creators). Ideally, introducing a running example early, when the objectives are first defined, would help readers better appreciate the nuances between scenarios. - Line 114: wrap the assumption on collective access to $N$ under Assumption definition (e.g., **Assumption ($A_0$)**) Questions For Authors: - Did you conduct any experiments for configurations with $\epsilon > 0$? If so, could you describe the general trend observed? - In practice, what performance did you observe for the lower bound on the success of *signal erasing*? - Line 590: "Additionally, each car is assigned a Car Evaluation label based on a scoring system that considers various factors such as safety rating, fuel type, warranty length, and others" Could you please provide more details on the scoring system used? - Were experiments repeated multiple times for each fixed value of $n$ (for example, in Fig. 1)? 
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed and constructive review, and their overall positive feedback. We also appreciate them pointing out the typos. We will update $n_{est}$ to $n_{estim}$​, add assumption $A_0$ regarding the collective's access to $N$, and include a running example if it is considered helpful. We will also make sure to include details about the relationship between features and labels in the synthetic dataset. **Concerning the synthetic dataset, which has also been mentioned by other reviewers**: our paper provides finite sample guarantees for collective action in classification, and the bounds we derive depend on several parameters. The main purpose of our experiments is to illustrate the bounds we obtained and to gain insights into the influence of the various parameters. We considered it more relevant to use a synthetic dataset to better understand the influence of these parameters. For example, we can deliberately manage the label imbalance (see our response to Reviewer CB4v) and clearly demonstrate that the adaptive strategy outperforms the naive strategies in Figure 3, even outperforming the naive strategy based on the second most frequent global label after $y^*$ (see Figure 3 and Table 1). Furthermore, for a fixed $g$, we can investigate how the composition of the training set elements within the signal set impacts the results using a synthetic dataset, but not with a real-world dataset. In ongoing work we will be working with real-world datasets, but for a first study of this methodology we felt that the priority was to understand its properties rather than simply exhibit an application which would give limited insight into whether our conceptual understanding of the parameters is on solid ground. *Experiments for configurations with $\epsilon > 0$* Yes, the lower bounds will become weaker as $\epsilon$ increases. 
*Lower bound on the success of signal erasing* The lower bounds are trivial (equal to zero) because the estimation terms are too large, as they depend on the cardinality of $\mathcal{X}$. We briefly mentioned this in the appendix (lines 1104-1105) but we can also include it in the main text if it is considered helpful. *Experiments repeated multiple times* We ran the experiments only once. We expect little variance, except for values very close to the steps, which do not affect the overall trend of the curves highlighted in the paper. However, we can still run additional experiments and average the results if deemed helpful. We thank the reviewer again for their valuable feedback.
Summary: This paper addresses the problem of collective action under finite samples. There are $N$ consumers, each with a datapoint $(x,y)\sim \mathcal{D}$ where $x\in \mathcal{X}$ and $y\in \mathcal{Y}$. Of these $N$ consumers, $n$ plan to collude, so that after learning from the $N-n$ clean samples and $n$ corrupted samples, the value of a metric $S$ on a test set $D_{test}$ of size $N_{test}$ is large. Previous work (Hardt et al 2023) established lower bounds on $S$ for $n, N\to \infty$ with the ratio $\alpha = \frac{n}{N}$ being constant. In this paper, the authors try to obtain lower bounds for finite $n$ and $N$. They aim to obtain lower bounds on $S$ that can be computed by the $n$ consumers before collusion, so that they can guarantee success of collusion, i.e., large $S$. The authors consider $3$ problems: i) signal planting, ii) signal unplanting, and iii) signal erasure. Here, signal planting and erasure had been introduced by (Hardt et al 2023). In each of these problems, there is a signal set $\tilde{\mathcal{X}}$ defined as the range of a map $g:\mathcal{X} \to \mathcal{X}$. In signal planting and unplanting, the goal is to add/remove a specific label $y^\star$ for all features in the signal set $\tilde{\mathcal{X}}$. In signal erasure, the goal is to learn a predictor that has the same output on $g(x)$ and $x$, thereby completely eroding the impact of $g$ and thus the signal set. For each of these problems, the authors give at least one strategy, defined by a specific transformation $h:\mathcal{X}\times \mathcal{Y} \to \mathcal{X}\times\mathcal{Y}$ that transforms the corrupted datapoints. The main theoretical claims of the paper are that a lower bound on the empirical metric $\hat{S}$ for these strategies can be computed efficiently from only the original data of the $n$ corrupted consumers (see Algorithms 1-4). The corresponding finite-sample lower bounds are provided in Theorems 3.3, 3.5, 3.7 and 3.9.
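For intuition, the transformation $h$ used by the planting/unplanting strategies summarized above can be sketched in a few lines (names and shapes are illustrative, not the paper's actual code; the unplanting runner-up label is assumed to have been estimated beforehand):

```python
# Illustrative sketch of the collective's transformation h: (x, y) -> (x', y').
# `g` maps a feature into the signal set; `y_star` is the target label y*.

def plant(datapoint, g, y_star):
    """Signal planting: move the feature into the signal set and attach y*."""
    x, _y = datapoint
    return (g(x), y_star)

def unplant(datapoint, g, runner_up):
    """Signal unplanting: relabel g(x) with the (estimated) most frequent
    label other than y* (cf. the strategy in Eq. (3))."""
    x, _y = datapoint
    return (g(x), runner_up)

# Toy signal map: appending a watermark token to a categorical feature string.
g = lambda x: x + "#wm"
print(plant(("car", "Average"), g, "Good"))    # -> ('car#wm', 'Good')
print(unplant(("car", "Good"), g, "Average"))  # -> ('car#wm', 'Average')
```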
Further, the authors test their signal planting/unplanting and erasure bounds on a large synthetic dataset of cars with categorical features. They show small deviations between the empirical quantities and their bounds. Further, their methods are more accurate than (Hardt et al 2023) in predicting the change in metric. **References** - (Ben-Dov et al 2024) The Role of Learning Algorithms in Collective Action. ICML. Claims And Evidence: Most claims in the submission are well supported. Here are the main ones. 1. The theorems in Section 3 provide finite-sample, easy-to-compute lower bounds on the metric $S$, thus yielding Algorithms 1-4. 2. Dependence of the success metric not just on the corruption ratio $\frac{n}{N}$ but also the absolute value of $n$: A nice explanation of this is provided in Appendix E.5 and Figure 7 in addition to Figure 2. The authors might consider moving some parts of this section from the appendix to the main paper. Note that this was the only experiment related to signal erasure. 3. Experiments on signal planting/unplanting to check the tightness of the lower bounds from the theorems were provided in Figures 1 and 3. 4. Better empirical performance than (Hardt et al 2023) in Figure 4 for a large number of samples, i.e., the population limit. The authors claim that the metric $S$ increases in a staircase fashion as $\frac{n}{N}$ increases. For this example, this staircase is visible with $3$-$4$ stairs; however, that is not enough evidence to claim existence of such a property. Either more experiments on different datasets/settings should be done or adequate theory provided for such a staircase property before such claims could be made. 5. Absence of experiments on the signal erasure task: The paper would have been more complete with these experiments. Methods And Evaluation Criteria: - **Dataset not real**: All experiments are performed on a single large synthetic dataset, with each datapoint being a car and its condition (one of $4$ conditions).
The feature vector of a car corresponds to its different parts, and the condition of the car is calculated as a score of these features. While the experiments on this dataset are complete (except the signal erasure task), no real dataset has been used. This is a bit surprising considering several appropriate experimental setups for this problem in (Hardt et al 2023, Ben-Dov et al 2024). Note that I don't expect the authors to run any additional experiments in the review period, but in case of acceptance, they should add at least $1$ task on a real dataset. - **Synthetic Dataset with Label Imbalance**: The other issue with this dataset is that it appears to be synthetic, but it has qualities of a real dataset, for instance label imbalance between "Good" and "Average" labels. If this were truly a synthetic dataset, then the authors should have controlled the label imbalance so that it does not affect the results. Can the authors verify whether this dataset is truly synthetic and the authors purposefully put in this imbalance, or whether some parts of it have been obtained from real datasets, which would explain this label imbalance? Note that real datasets are generally harder than synthetic ones. This label imbalance leads to large discrepancies in the $2$ subfigures on the right in Figure 5. - **Reproducibility**: The authors didn't mention the number of random seeds, the amount of compute resources, or any variances in their experiments. While these are not extremely important to the main result, they are important for reproducibility. I would recommend including this in the final version in case of acceptance. Theoretical Claims: - **Correctness**: The proofs seem correct to me. The core idea the authors use is this: first, using Hoeffding's inequality, they bound the difference between population and finite-sample terms. Then, they take an appropriate union bound over all these concentration bounds so that they hold simultaneously.
After this, all they need is to plug in the expression of the metric $S$ and use the given strategy and the concentration inequalities, until they end up with terms dependent only on the datapoints of the $n$ consumers. - **Justification of Assumptions 1 and 2**: From what I understand in (Hardt et al 2023), these assumptions are not explicitly used. The value $\tau$ in (Hardt et al 2023) seems similar to $\eta$ in Assumption 1, although $\eta$ is required to hold for every feature in $\tilde{\mathcal{X}}$, while $\tau$ is defined only in expectation w.r.t. the true distribution. I understand the authors' justification for these assumptions, but can they compare this with the corresponding assumptions, or lack thereof, in comparable works like (Hardt et al 2023)? Experimental Designs Or Analyses: Please see the "Methods and Evaluation Criteria" section for all comments. Supplementary Material: I went through the appendix but not the code. Relation To Broader Scientific Literature: This paper provides an efficient way for consumers to determine their success metric if they collude. This is a strict improvement over the known collusion results of (Hardt et al 2023). Their related works section is thorough. Their discussion section also clearly connects their work and its limitations to other related problems. Essential References Not Discussed: Most essential references were discussed. A recent reference is (Ben-Dov et al 2024), which the authors might have missed. This reference uses a Distributionally Robust Optimization approach but with infinite samples like (Hardt et al 2023). Other Strengths And Weaknesses: ## Other Weaknesses - **Convex action (Sections 4.1 \& 4.2, Hardt et al 2023)**: The authors did not extend their finite-sample guarantees to the case of collusion when the goal is to minimize a convex empirical risk as described in (Hardt et al 2023). This is an extremely important application of these frameworks.
Could the authors comment on whether their results could be directly applied in these cases? Would it be possible if the model classes have strong uniform generalization bounds? If applying their scheme is hard, then they can just point out the main difficulties in this setup. Other Comments Or Suggestions: - **Optimal value of $n_{est}$**: In signal unplanting, the authors provide experiments on there being an optimal value of $n_{est}$ in Figure 3(a). It seems like the authors can provide theoretically optimal values of $n_{est}$ as well, at least order-wise. - **Typos**: 1. Line 173, 2nd column : "Fo" -> "For". 2. Line 876 : "than" -> "as". 3. Lines 913-914 : two "conditionnally" -> "conditionally". Questions For Authors: - Line 428, 1st column : "This mirrors the shape of curves observed in signal unplanting in Figure 3, which would also appear in Figure 1 and Figure 2". Unfortunately, I am not able to identify any staircases in Figures 1-3 and all of them look more like noisy sigmoids. - Does the bound on $\eta< \frac{1}{\\# \tilde{\mathcal{X}}}$ result in additional constraints in Theorem 3.9? Also, in Lines 1105-1106, can the poor value of the bound in Theorem 3.9 possibly be improved by slightly changing the definition of $\eta$ to a quantity in expectation, or is it unavoidable? - What was the issue with the earlier proof of (Hardt et al 2023) for $\epsilon>0$ (Appendix F.4)? Does Definition F.3, which appears in the current version of (Hardt et al 2023), address it? Code Of Conduct: Affirmed. Overall Recommendation: 3
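The proof pattern noted under "Theoretical Claims" (a Hoeffding deviation per term, made to hold simultaneously via a union bound) can be illustrated numerically; the sketch below shows the generic textbook inequalities, not the paper's specific bound $R(\cdot)$:

```python
import math

def hoeffding_dev(n, delta):
    """Two-sided Hoeffding deviation for [0, 1]-bounded terms: with
    probability >= 1 - delta, |empirical mean - population mean| <= this."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def simultaneous_dev(n, delta_total, k):
    """Union bound over k concentration events: split the total failure
    probability evenly so that all k bounds hold at the same time."""
    return hoeffding_dev(n, delta_total / k)

print(round(hoeffding_dev(1000, 0.05), 4))         # -> 0.0429
print(round(simultaneous_dev(1000, 0.05, 10), 4))  # widens only mildly
```

Requiring $k$ bounds to hold jointly inflates each deviation only by a factor logarithmic in $k$, which is why the stepwise union-bound argument stays tight.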
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed and technical review, which allows for a discussion of some of the paper’s theoretical aspects. We also appreciate their positive feedback and will now address the questions raised. *Better empirical performance than (Hardt et al 2023) in Figure 4 for large number of samples* Our bounds are not only empirically superior but also theoretically better (see Proposition F.2, for example). *Either more experiments on different datasets/settings should be done or adequate theory provided for such staircase-property before such claims could be made* For a finite signal set, we can theoretically prove that our bounds exhibit a staircase shape, unlike Hardt et al., just by using law of total probability and conditioning on the possible values for $g(x)$. We can include a rigorous proof in the paper if it is considered helpful. *Absence of experiments on signal erasure task / dataset not real* We refer to our response to Reviewer HD8o. *Synthetic Dataset with Label Imbalance* The dataset is truly synthetic and we intentionally introduced this imbalance. *Reproducibility* We fixed the random seed for our experiments (see src file). *Justification of Assumptions 1 and 2* The parameters $\eta$ and $\tau$ are very different. While $\eta$ relates to the nature of the base distribution and ensures that the most frequent label is sufficiently separated from the second most frequent one, $\tau$ reflects how similar the probability of a label given $x$ is to its probability given $g(x)$. In other words, $\tau$ captures the extent to which $g$ already influences the platform’s decisions, whereas $\eta$ is a simple assumption about the base distribution, entirely independent of $g$ or any other parameters specific to the collective. Regarding Assumption 2, we refer to our response to Reviewer 6HMD. 
*Essential References Not Discussed* Indeed, the paper cited by the reviewer is relevant, and we will ensure it is included in the references. *Convex action* As mentioned by the reviewer, convex risk minimization is also an important framework. We believe that gradient-based learning, as outlined by Hardt et al., is also an important framework. While the focus of our paper is only on classification, exploring these setups is an interesting direction for future work. We believe that our setting could be extended to this case even without assuming strong uniform generalization bounds: one can simply estimate gradients directly using samples from the base distribution and the gradient-neutralizing distribution. One additional difficulty is computing a gradient-neutralizing strategy. Within our setting, the collective does not directly have access to expected gradients but rather to estimates, and the collective may only be able to compute an approximate gradient-neutralizing strategy. This would yield additional error terms. *Identify staircases in Figures 1-3* The steps in the staircase are closely spaced, and the bounds are computed only for fixed values of $n$; this is why the staircase resembles a sigmoid curve. *Bound on $\eta$* We do not assume that $\eta < \frac{1}{Card(\tilde{\mathcal{X}})}$. Regarding the second part of the reviewer’s remark: it is possible to redefine $\eta$ to discard features not in $\tilde{\mathcal{X}}_0 \subseteq \tilde{\mathcal{X}}$, where $\tilde{\mathcal{X}}_0$ is a subset of $\tilde{\mathcal{X}}$ such that $\tilde{x} \sim \tilde{\mathcal{D}}$ has a high probability of belonging to $\tilde{\mathcal{X}}_0$. This aligns with the reviewer’s intuition of asking whether a similar definition of $\eta$ in expectation is possible.
In this case, we can derive theoretical guarantees by conditioning: the success probability equals the same probability conditioned on the feature $g(x)$ being in $\tilde{\mathcal{X}}_0$, multiplied by the probability that $g(x)$ is in $\tilde{\mathcal{X}}_0$, plus a term that depends on the probability of $g(x)$ not being in $\tilde{\mathcal{X}}_0$, which we can typically discard. *Issue with the earlier proof of (Hardt et al., 2023)* There was a mistake in the algebraic manipulations leading to an erroneous bound. In the current version of Hardt et al., Definition F.3 is used, but the bounds remain suboptimal due to unnecessary inequalities. We sincerely thank the reviewer for the thorough review and for pointing out typos. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. My questions have been answered, and I do not have any further questions. Just one remark. For the bound on $\eta$, take Assumption A1 and sum it over all $x \in \tilde{\mathcal{X}}$. The LHS is still a probability, so it can be bounded by $1$, and the RHS can be lower bounded by $\eta |\tilde{\mathcal{X}}|$. This is a consequence of Assumption A1.
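The closing remark above can be written out explicitly. A sketch, under the assumption (mine, for illustration; the exact statement of Assumption A1 is in the paper) that A1 lower-bounds probabilities $p_{\tilde{x}}$ of pairwise disjoint events by $\eta$:

```latex
% Sketch: why Assumption A1 would force \eta \le 1 / |\tilde{\mathcal{X}}|.
% Suppose A1 asserts p_{\tilde{x}} \ge \eta for every \tilde{x} \in \tilde{\mathcal{X}},
% where the p_{\tilde{x}} are probabilities of disjoint events. Summing over the
% signal set:
\[
  1 \;\ge\; \sum_{\tilde{x} \in \tilde{\mathcal{X}}} p_{\tilde{x}}
    \;\ge\; \sum_{\tilde{x} \in \tilde{\mathcal{X}}} \eta
    \;=\; \eta \,\lvert \tilde{\mathcal{X}} \rvert
  \quad\Longrightarrow\quad
  \eta \;\le\; \frac{1}{\lvert \tilde{\mathcal{X}} \rvert}.
\]
```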
Summary: The paper explores statistical inference for collective action in learning platforms. In particular, the paper examines not only the classic signal planting procedure introduced by Hardt et al., but also signal unplanting as well as signal erasing, both of which require statistical inference for defining the optimal strategy $h$. In the end, the authors develop a framework that provides a theoretical and algorithmic treatment and perform empirical studies on synthetic datasets. Claims And Evidence: The claims are supported by clear and convincing mathematical arguments. Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense for the problem. Theoretical Claims: I checked the soundness of the theoretical claims. Experimental Designs Or Analyses: I did check the experimental design and they are sound and valid. Supplementary Material: I briefly skimmed through the proofs for some theorems of the main paper. Relation To Broader Scientific Literature: The key contribution of the paper is relating statistical inference to the literature on collective action, which is missing in previous work in the literature. Essential References Not Discussed: NA Other Strengths And Weaknesses: Overall, the paper is well-written and well-explained. The proposed signal planting and signal erasing methods are intuitive. The experimental section is a bit weaker since it's a simulated setting. Another weakness is that some theoretical claims rely on particular assumptions or particular signal strategies; for example, it's unclear to me whether Theorem 3.7 would still hold if the signal unplanting strategy is different from equation (3), and whether Theorem 3.9 would still hold if the transformation g is not idempotent. Some discussion on how violating these assumptions/particular strategies could affect the theoretical results could be helpful.
Other Comments Or Suggestions: It would be nice if the authors could add some intuitions behind theorem 3.7, especially how it differs from the previous theoretical guarantee (Theorem 3.3). Questions For Authors: 1) How should I understand the inequality in section 3.3.1? 2) Does Theorem 3.7 still hold if we randomly choose a label other than $y^*$ instead of $\hat{y}_{\tilde{x}}$ according to equation (3)? 3) In section 3.4 (signal erasing), it seems like the success of the proposed signal erasing procedure relies on the transformation function g to be idempotent, namely $g(g(x)) = g(x)$. What are some examples of idempotent vs. non-idempotent transformations? How practical is it in real life? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: First, we would like to thank the reviewer for their time and their insightful comments. As mentioned, our main contribution is introducing a statistical inference component to the collective action framework, which is crucial in practice, as collectives often seek guarantees about the effectiveness of their strategies. We also appreciate the reviewer’s thoughtful questions and will ensure additional clarifications in the final version to enhance intuition and understanding. *It would be nice if the authors could add some intuitions behind theorem 3.7, especially how it differs from the previous theoretical guarantee (Theorem 3.3).* We agree that a discussion here would indeed help provide the reader with more intuition, particularly in comparison to signal planting (Theorem 3.3). In signal planting, the strategy is simple: flood the platform with label $y^*$. In signal unplanting, an intuitive approach is to change the label by choosing the most probable label $\hat{y}\_\tilde{x}$ after $y^*$ for every $\tilde{x}$, which is exactly the meaning of equation (3). However, $\hat{y}\_\tilde{x}$ is unknown, so we propose estimating it using a subset of the collective of size $n_{est}$. This results in a two-step inference method, where we first adaptively estimate the label $\hat{y}\_\tilde{x}$, and then use concentration inequalities that depend on $\hat{y}\_\tilde{x}$. To ensure independence and correctly apply Hoeffding’s inequality, we carefully split the collective into the estimation subset of size $n_{est}$ and the rest of the collective of size $n - n_{est}$. This is why the estimation term for $\Delta_{\tilde{x}}$ in Theorem 3.7 is $R(n - n_{est})$, rather than $R(n)$ as in Theorem 3.3. This directly quantifies the cost of the collective not knowing the optimal label to play (contrary to signal planting, where the optimal label is known): the counteracting term in the bound is larger since $R(n - n_{est}) > R(n)$.
*1. How should I understand the inequality in section 3.3.1?* and *2. Does Theorem 3.7 still hold if we randomly choose a label other than $y^\*$ instead of $\hat{y}\_\tilde{x}$ according to equation (3)?* To build intuition, let’s consider the following simplified case: the signal set $\tilde{\mathcal{X}}$ consists of a single feature, and the label frequencies associated with the training set without the collective are as follows: 100 for $y_0 =: y^*$, 70 for $y_1$, 60 for $y_2$, and 50 for $y_3$. In this case, the optimal strategy seems to be estimating the most probable label $\hat{y}$ after $y_0$ (here, $\hat{y} = y_1$) and flooding the platform with it. However, $\hat{y}$ is unknown to the collective. The collective could naively flood the platform with either $y_2$ or $y_3$ as well. This corresponds to planting the signal with those labels, which is exactly the meaning of the inequality in Section 3.3.1. A more effective approach is to estimate $\hat{y}$, which we formalize in the paper. Regarding the suggestion of using randomness: while it is possible, we believe it is suboptimal, even compared to naive strategies, which is why we did not study it. To see why, assume a collective of size $n=60$. Naively flooding the platform with $y_i$ ($i \in \{1,2,3\}$) succeeds since $y_i$ becomes the most frequent label for $\tilde{x}$. However, choosing a label uniformly at random among those $\neq y^*$ leads to expected frequencies: $y_0: 100$, $y_1: \approx 90 < 100$, $y_2: \approx 80 < 100$, and $y_3: \approx 70 < 100$, resulting in an unsuccessful collective action. *3. In section 3.4 (signal erasing), it seems like the success of the proposed signal erasing procedure relies on the transformation function $g$ being idempotent. What are some examples of idempotent vs. non-idempotent transformations? How practical is it in real life?* We provide examples of idempotent functions $g$ in lines 311-319.
This assumption is generally not restrictive, especially in tabular data or opaque watermarking. **Concerning weaknesses:** *The experimental section is a bit weaker since it's a simulated setting* We refer to our response to Reviewer HD8o. *Another weakness is that some theoretical claims rely on particular assumptions or particular signal strategies* We study strategies that are intuitively optimal and provide theoretical guarantees. Establishing a unifying framework for analyzing strategy optimality may be an interesting direction for future work. The guarantees we derive depend on assumptions, such as the idempotency of $g$ for signal erasing (line 1025). We do not claim that this assumption is necessary, but we have not found a way to derive guarantees without it. If helpful, we can clarify why we use this assumption, in contrast to Hardt et al. We sincerely thank the reviewer for their valuable feedback, which helps improve the paper, and we hope we have addressed their concerns.
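The worked label-frequency example in the answers to questions 1-2 above (base counts 100/70/60/50 for $y_0, y_1, y_2, y_3$ and a collective of size $n = 60$) is easy to verify numerically; a quick sketch:

```python
# Quick check of the rebuttal's label-frequency example (illustrative only).
base = {"y0": 100, "y1": 70, "y2": 60, "y3": 50}  # y0 = y*
n = 60

def flood(counts, label, votes):
    """Everyone in the collective plays the same label."""
    out = dict(counts)
    out[label] += votes
    return out

# Naive strategy: flood even the weakest alternative y3 -- it still wins.
naive = flood(base, "y3", n)

# Random strategy: votes split evenly across y1, y2, y3 (expected counts).
random_exp = dict(base)
for lab in ("y1", "y2", "y3"):
    random_exp[lab] += n / 3

naive_winner = max(naive, key=naive.get)              # -> "y3" (110 > 100)
random_winner = max(random_exp, key=random_exp.get)   # -> "y0": y* stays on top
```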
Origin Identification for Text-Guided Image-to-Image Diffusion Models
Accept (poster)
Summary: This paper introduces a new task, identifying the original image for a generated image from text-guided image-to-image translation with diffusion models, which helps prevent the misuse of the generated content such as misinformation and copyright infringement. To deal with this problem, the authors build a dataset OriPID and propose a novel method with theoretical derivations. Although some questions remain regarding the claimed "theoretical guarantee" and generalization to other translation methods, the experimental analysis appears thorough and comprehensive. The paper is well-written and well-structured. ## update after rebuttal The authors have addressed my concerns and I recommend accepting this paper. Claims And Evidence: Most of the claims are supported by clear and convincing evidence: 1. The proposed dataset OriPID supports the experimental evaluations of the origin identification problem, in both seen and unseen settings regarding visual discrepancy between different diffusion models as shown in Figure 2. 2. Existing methods have difficulties in handling the proposed task. They either completely fail (pre-trained deep embedding models in Table 2), or cannot generalize to unseen scenarios (fine-tuned similarity-based methods and specialized domain generalization methods in Table 4). 3. The proposed method is supported by theoretical derivations and implementation details. It achieves good performance in origin identification in both seen and unseen settings as in Table 4, with advantages in generalization, efficiency, and robustness. The ablation studies further demonstrate the efficacy of the proposed method. Methods And Evaluation Criteria: The proposed method, a linear transformation of the VAE embeddings learned with a metric learning loss, and the evaluation criteria of mAP and Acc both make sense. Theoretical Claims: In the proof of Lemma 1, the derivation of z_0' in Eq. (13) appears problematic.
The estimation of noise by the trained network is a step-by-step process, meaning z_0' cannot be directly obtained by reversing Eq. (12) in a single step. This makes the reasoning in Lines 235-239 for Theorem 1 questionable. Experimental Designs Or Analyses: I checked all the experimental designs and analyses and found no further issues. Supplementary Material: I reviewed all parts of the supplementary material. Relation To Broader Scientific Literature: This paper proposes a novel task with an effective solution to address the potential misuse of text-guided image-to-image translation results by diffusion models. The benchmark built in this work and the proposed method should further facilitate the exploration of generative content detection and tracing for security concerns. Essential References Not Discussed: One series of related works that is missing concerns text-guided image-to-image translation using diffusion models. The scope of translation mechanisms that the proposed method can cover remains unclear. There are some aspects that I think could impact the effectiveness of the proposed method: 1. How do you encode the original image during translation? In Line 750, does "the default mode in the AutoPipelineForImage2Image of diffusers" represent directly adding noise to the original image latents as in SDEdit [1]? What if you use DDIM inversion [2] as in prompt-to-prompt [3]? How does the denoising strength affect the results? If the denoising strength is high, the translation results could largely rely on the text prompts. Does the method still perform well with denoising strength 1.0? Does it still make sense to retrieve the original image if the generated image bears little resemblance to it (Figure 10 (c) indoor)? How does the CFG affect the results? 2. InstructPix2Pix and IP-Adapter encode the original image through VAE latents concatenation and CLIP embeddings, which is discussed in Appendix G. Figure 13 (b) is a bit confusing. 3.
Does attention control, such as in prompt-to-prompt [3] and plug-and-play [4], affect the results? [1] Meng, Chenlin, et al. "SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations." International Conference on Learning Representations, 2022. [2] Song, Jiaming, Chenlin Meng, and Stefano Ermon. "Denoising Diffusion Implicit Models." International Conference on Learning Representations, 2021. [3] Hertz, Amir, et al. "Prompt-to-Prompt Image Editing with Cross-Attention Control." International Conference on Learning Representations, 2023. [4] Tumanyan, Narek, et al. "Plug-and-play diffusion features for text-driven image-to-image translation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. Other Strengths And Weaknesses: Strengths: 1. This paper proposes an important and interesting task. 2. Most of the claims are supported with evidence as mentioned above. 3. The paper is well-written and easy to follow. Weaknesses: 1. There are questions in the theoretical derivation. 2. Some related works are missing sufficient discussion/comparisons, which could impact the scope of this work. Other Comments Or Suggestions: In the caption of Figure 10, there are 3 different subjects instead of 6? Questions For Authors: Please refer to the weaknesses mentioned above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your efforts in reviewing our paper. We are encouraged that you find: (1) most of the claims are supported by **clear** and **convincing** evidence; (2) this paper proposes an **important**, **novel**, and **interesting** task; (3) the analysis appears **thorough** and **comprehensive**; (4) the paper is **well-written**, **well-structured**, and **easy to follow**. We address your questions below and will add these to the final version. **Q1. In the proof of Lemma 1, the derivation of z_0' in Eq. 13 appears problematic.** A1. We apologize for the confusion. We acknowledge that, in practice, this process is typically iterative, refining $z_t$ over many steps. However, Eq. 13 does not bypass the chain; rather, it is a **mathematically equivalent alternative**, as shown in [1] (see Eq. 15 and *Progressive generation*) and [2] (see Eq. 12 in https://shorturl.at/xhnd7). Furthermore, [1] also explicitly mentions that *“there is also the possibility of predicting $x_0$”*, but this gave worse empirical quality than iterative noise prediction. In other words, Eq. 13 is a convenient analytical step; practically, the preference for multi-step denoising is due to empirical gains from refining the prediction gradually. Please also see a discussion here: https://shorturl.at/RBZER. **Nevertheless, we are happy to further discuss and revise Eq. 12 to Eq. 13.** [1] Denoising Diffusion Probabilistic Models, NeurIPS 2020 [2] Improved Denoising Diffusion Probabilistic Models, ICML 2021 **Q2. How do you encode the origin? Do you directly add noise to the origin latents as SDEdit? Do DDIM inversion and attention control affect results, such as in prompt-to-prompt and plug-and-play?** A2. Thank you for the insightful question! The origins are first encoded by the VAE, and noise is directly added to the resulting image latents, as in SDEdit.
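As a minimal numerical sketch of the two closed forms discussed in A1 and A2 (the SDEdit-style forward noising of the origin latent, and the single-step $z_0'$ prediction behind Eq. 13), with a perfect noise estimate standing in for the trained network and all values illustrative:

```python
import math

# Illustrative scalar stand-ins for one latent value, its noise, and the
# cumulative alpha at timestep t (a real latent is a tensor).
z0, eps, alpha_bar_t = 1.7, -0.4, 0.3

# SDEdit-style forward step: noise the VAE-encoded origin latent directly.
z_t = math.sqrt(alpha_bar_t) * z0 + math.sqrt(1 - alpha_bar_t) * eps

# Single-step z0 prediction: inverting the forward equation recovers z0
# exactly when the noise estimate is exact (Eq. 13 as an analytical step).
z0_hat = (z_t - math.sqrt(1 - alpha_bar_t) * eps) / math.sqrt(alpha_bar_t)
assert abs(z0_hat - z0) < 1e-12
```

In practice the network's noise estimate is imperfect, which is why multi-step denoising is preferred empirically, as noted above.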
Nevertheless, *DDIM inversion* and *attention control* do ***not challenge the generalizability of our method***. Experiments are shown below. **Experimental Setup.** Since Prompt-to-Prompt cannot edit real images, we adopt its improved version, EDICT [1], a re-formulation of the DDIM process that allows mathematically exact inversion. Specifically, we: - select 1,000 origins and ask GPT-4o to generate inversion prompts; - input the inversion prompts, origins, and guidance prompts into EDICT and Plug-and-Play to generate translations. Here, we follow their original parameters; - search these queries in 1,000,000 images, consisting of origins and distractor images. **Experimental Results.** The table below shows that our model, trained on **SD 2** and the **SDEdit** scheme, successfully generalizes to these **new schemes**. Again, these experiments **validate** our Hypothesis 1, i.e., the **generalization boundary** of our method is determined by ***whether the generated image is conditioned on a VAE-encoded original image***. |Methods|mAP|Acc| |-|-|-| |EDICT (SD 1.4)|89.0|86.6| |Plug-and-Play (SD 1.5)|99.8|99.7| [1] Edict: Exact diffusion inversion via coupled transformations, CVPR 2023 **Q3. The influence of the denoising strength. Does it still make sense to retrieve the origin if the generated image bears little resemblance to it (Fig. 10 (c) indoor)?** A3. Thank you for the insightful question! The influence of denoising strength is discussed in **Q1** by **Reviewer PzDk**. We acknowledge that it is challenging to clearly define resemblance. Nevertheless, the origin and generations in Fig. 10 (c) share similarities, such as the pool table, indoor setting, lighting placement, viewpoint, and room layout. We argue that retrieving the origins at very large strengths does not make sense, as shown by our failures: https://huggingface.co/datasets/ICML2025Rebuttal/ICML2025_Rebuttal/resolve/main/fail_example.pdf **Q4. How does CFG affect results?** A4. 
Thank you for the insightful question! In our paper, experiments are conducted at CFG=7.5, which is the default for most diffusion models. The table below shows our method **performs well** across many commonly-used CFGs. These experiments are obtained by training on SD 2 at CFG=7.5 and testing on ColorfulXL at varying CFGs. |CFG|3.5|4.5|5.5|6.5|7.5|8.5|9.5|10.5| |-|-|-|-|-|-|-|-|-| |mAP|94.9|93.9|92.3|90.5|88.8|87.7|86.3|84.8| |Acc|94.0|92.8|91.0|89.0|87.1|85.9|84.4|82.6| **Q5. InstructPix2Pix and IP-Adapter encode the origin through VAE latents concatenation and CLIP embeddings, which is discussed in App. G. Fig. 13 (b) is a bit confusing.** A5. Thank you for the kind reminder! Fig. 13 (b) aims to provide a unified perspective on how InstructPix2Pix and IP-Adapter encode the origin. We apologize for omitting the concatenation in InstructPix2Pix. Following your advice, we will replace Fig. 13 (b) in **App. G** with a textual description in the **Related Work**. **Q6. In Fig. 10, there are 3 subjects instead of 6?** A6. Thank you for the kind reminder! You're right—we'll fix it to **3** in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for the thorough and detailed response! Most of my concerns are addressed and I have no further questions. Congratulations on this interesting work! --- Reply to Comment 1.1.1: Comment: We are grateful for your thoughtful and comprehensive feedback, and we are delighted to hear that we have successfully addressed the concerns raised!
Summary: This paper introduces the Origin Identification for Text-Guided Image-to-Image Diffusion Models (ID^2) task, aiming to retrieve the original image of a given translated query. The paper highlights the risks of misuse, including misinformation, copyright infringement, and evading content tracing. A key contribution is OriPID, a dataset containing a large-scale reference set of images and guided prompts, designed to test the generalizability of ID^2 models across different diffusion models. The paper also presents a theoretically guaranteed method that minimizes the distance between Variational Autoencoder embeddings of generated samples and their origins through a learned linear transformation, demonstrating generalizability across different diffusion models. Experimental results show a +31.6% mAP improvement over similarity-based retrieval methods. ## update after rebuttal All of my questions have been fully addressed, and I appreciate the thoughtful and detailed responses. I maintain my positive rating. Claims And Evidence: The paper claims that: 1. Similarity-based methods fail to generalize across diffusion models → Supported by experiments, e.g., Table 4, Section 5.2, showing a sharp drop in mAP when training and testing on different models. 2. A linear transformation can bridge the gap in VAE embeddings across models → Proven theoretically and empirically, with experiments demonstrating improved performance. However, the claim of generalizability to all diffusion models lacks thorough validation, as only specific models (e.g., Stable Diffusion variants) are tested. The performance on other diffusion-based models, e.g., FLUX.1-dev, is not explored. Methods And Evaluation Criteria: The evaluation metrics (mAP, top-1 accuracy) are standard and appropriate. The training/testing split across different models is a strong design choice to assess generalization.
Theoretical Claims: The paper provides proofs for the existence and generalizability of a linear transformation that aligns VAE embeddings of original and modified images. The mathematical foundation appears correct. Experimental Designs Or Analyses: The experiments are well-structured, covering: - Performance across seven diffusion models. - Efficiency comparisons with similarity-based and generalization-focused methods. However, the paper does not include visualizations of successfully identifying the origin of a generated query, which would make the claim stronger. Visualizing retrieval cases, including both successes and failures, would significantly improve interpretability. Supplementary Material: Yes, I have reviewed Supp. G and Fig. 10. Relation To Broader Scientific Literature: The paper contributes to image modification detection within diffusion models. Essential References Not Discussed: None Other Strengths And Weaknesses: - Strength - The proposed linear transformation approach is simple, efficient, and theoretically grounded. - OriPID dataset provides a strong benchmark for future work on ID^2. - Extensive experiments demonstrate generalization across multiple diffusion models. - Weakness - The paper does not include visualizations of successful origin identification results, which would make the claim stronger. - Does not discuss the commonly used diffusion-based model FLUX.1-dev. - Auto-regressive models, e.g., Janus-Pro [A], Emu3 [B], VAR [C], also contribute to current T2I and I2I generation; however, this paper only works on diffusion-based methods. - Limited real-world validation: The dataset is synthetic, and real-world image alterations (e.g., Photoshop edits) are not explored. References: [A] Chen et al., Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling. ArXiv, 2025 [B] Wang et al., Emu3: Next-Token Prediction is All You Need. ArXiv, 2024.
[C] Tian et al., Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction. NeurIPS 2024 Other Comments Or Suggestions: Contribution Point 3 in the introduction lists seven performance numbers but does not directly specify which numbers correspond to which models. I would suggest either listing them explicitly or providing an averaged score. Questions For Authors: - What happens if an image is generated by an auto-regressive model and later modified by a diffusion model? How well would ID^2 handle this scenario? - Has the paper tested the proposed approach on real-world, manually edited images (e.g., Photoshop modifications or any random AI editing tools)? Would the method generalize to such cases? - Will the paper release the OriPID dataset? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your efforts in reviewing our paper. We are encouraged that you find: (1) the proposed linear transformation approach is **simple**, **efficient**, and **theoretically grounded**; (2) the OriPID dataset provides a **strong benchmark** for **future work** on ID$^2$; (3) extensive experiments demonstrate **generalization** across **multiple diffusion models**. We address your questions below and will incorporate all your suggestions into the final version of our paper. **Q1. The performance on other diffusion-based models, e.g., FLUX.1-dev, is not explored.** A1. Thank you for this valuable suggestion. Accordingly, we evaluate our proposed method on images generated by FLUX.1-dev. The experiments below show that our method generalizes well to FLUX.1-dev. |Methods|mAP|Acc| |---|---|---| |Circle loss|71.2|67.1| |SoftMax|65.1|61.7| |CosFace|68.9|64.2| |IBN-Net|72.2|68.5| |TransMatcher|76.9|72.6| |QAConv-GS|76.8|71.9| |VAE Embed. (Ours)|52.3|47.6| |**Linear Trans. VAE (Ours)**|**88.9**|**87.1**| **Q2. The paper does not include visualizations of successful/failed origin identification results, which would make the claim stronger.** A2. Thank you for this valuable suggestion. Accordingly, we visualize several successful origin identification results, available at https://huggingface.co/datasets/ICML2025Rebuttal/ICML2025_Rebuttal/resolve/main/success.pdf. Our observations indicate that the origin of queries—spanning various topics and generated by different diffusion models—can be effectively traced. The failure cases are available in the Appendix (Section E). **Q3. Auto-regressive models also contribute to current T2I and I2I generation. However, this paper only works on diffusion-based methods.** A3. Thank you for your valuable comment. We acknowledge these auto-regressive advancements, and the current scope of our work is indeed focused on diffusion-based methods.
This is because diffusion models remain a cornerstone in both research and industry due to their proven reliability and high-quality results. As a result, the dataset, method, and theory we introduce are directly applicable to practical scenarios and thus useful. Nevertheless, we agree that exploring auto-regressive approaches is an important future direction, and we plan to investigate them in our subsequent work. **Q4. Limited real-world validation: The dataset is synthetic, and real-world image alterations (e.g., Photoshop edits) are not explored.** A4. Thanks for this valuable suggestion. Accordingly, we evaluate our method on a new dataset, SEED-Data-Edit [1], which contains 52k image editing samples. These samples were collected from **amateur photographers** who posted their images along with editing requests. **Photoshop** experts then fulfilled these requests, providing the edited images as target images. Experimentally, we (1) de-duplicate to get 10,274 image pairs; and (2) treat the edited (target) images as queries and search for them within a pool consisting of their origins along with 1,000,000 distractor images. The experiments below show that: (1) our method generalizes effectively to real-world, manually edited images; and (2) it achieves the best performance compared to all competing methods. |Methods|mAP|Acc| |---|---|---| |Circle loss|76.6|74.5| |SoftMax|76.2|73.2| |CosFace|73.1|70.1| |IBN-Net|75.4|72.1| |TransMatcher|78.3|76.4| |QAConv-GS|74.4|71.9| |VAE Embed. (Ours)|66.6|64.6| | **Linear Trans. VAE (Ours)**|**86.6**|**85.5**| [1] Seed-data-edit technical report: A hybrid dataset for instructional image editing **Q5. Contribution Point 3 in introduction lists seven performance numbers but does not directly specify which numbers correspond to which models.** A5. Thanks for this suggestion. 
We will list them explicitly in the final version: *(2) the effectiveness of our proposed method: it achieves 88.8%, 81.5%, 87.3%, 89.3%, 85.7%, 85.7%, and 90.3% mAP, respectively, for Stable Diffusion 2, Stable Diffusion XL, OpenDalle, ColorfulXL, Kandinsky-3, Stable Diffusion 3, and Kolors.* **Q6. What happens if an image is generated by an auto-regressive model and later modified by a diffusion model?** A6. Thank you for the question. In response, we (1) generate 5,000 images using Janus-Pro-7B and modify them with ColorfulXL, and (2) use the modified images as queries to search within a pool containing the original Janus-Pro-7B outputs along with 1,000,000 distractor images. The experimental results indicate that our method achieves **similar performance** whether the origin is a real image or one generated by an auto-regressive model. |Origin|mAP|Acc| |---|---|---| |Real|89.3|87.7| |Auto-regressive|90.5|88.4| **Q7. Will the paper release the OriPID dataset?** A7. Yes! All the proposed datasets (including training, query, and gallery images) and all code (including training, testing, and dataset-generation code) will be made publicly available to facilitate future research. --- Rebuttal Comment 1.1: Comment: Thank you for the authors’ clarifications. All of my questions have been fully addressed, and I appreciate the thoughtful and detailed responses. I will maintain my positive rating. Well done! --- Reply to Comment 1.1.1: Comment: We are pleased to know that our efforts have satisfactorily addressed the concerns raised, and we sincerely appreciate your insightful and thorough reviews!
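The method defended in this thread (a single linear transformation applied to frozen VAE embeddings, followed by nearest-neighbor retrieval) can be sketched on synthetic data. Plain least squares stands in here for the paper's metric-learning objective (CosFace), and all dimensions, seeds, and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                                    # embedding dimension (illustrative)
W_true = rng.normal(size=(d, d))          # pretend generation acts linearly

origins = rng.normal(size=(200, d))       # stand-ins for origin VAE embeddings
queries = origins @ W_true.T + 0.01 * rng.normal(size=(200, d))  # "generated"

# Fit one linear map sending query embeddings back toward their origins.
W, *_ = np.linalg.lstsq(queries, origins, rcond=None)

def retrieve(q):
    """Return index of the origin most cosine-similar to the mapped query."""
    t = q @ W
    sims = (origins @ t) / (np.linalg.norm(origins, axis=1) * np.linalg.norm(t))
    return int(np.argmax(sims))

acc = np.mean([retrieve(queries[i]) == i for i in range(200)])
assert acc > 0.95  # near-perfect retrieval on this toy setup
```

The same single matrix is then reused for queries from unseen generators, which is the generalization property the theorems argue for.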
Summary: - This paper introduces a new problem, "origin identification", for text-guided image-to-image diffusion models, with the goal of retrieving the original image given a query image that was transformed by a text-conditioned diffusion model. - The paper proposes a new dataset, OriPID, containing original images, text prompts, and query images produced by seven popular diffusion models. - The paper also provides a novel retrieval-based method that learns a single linear transformation so that an original and its generated variant lie close in the transformed embedding space. - The paper includes theoretical arguments showing such a linear transformation should exist, and that it should generalize to unseen diffusion models. Claims And Evidence: 1. **Claim:** The paper claims that for a well-trained text-to-image diffusion model and its VAE encoder, one can learn a single linear transform that maps each generated-image embedding close to its original-image embedding. **Evidence:** Theorem 1 shows the derivation and proves it. Empirical results in Table 2 also validate it. 2. **Claim:** The same linear transform can work well for images generated by other unseen diffusion models. **Evidence:** Theorem 2 shows the derivation and proves it. Empirical results in Fig. 9 also validate it. Methods And Evaluation Criteria: 1. The proposed method simply consists of VAE embeddings and a linear transformation. 2. For the proposed origin identification task, the paper uses mAP and accuracy for evaluation. 3. The proposed methods and evaluation criteria make sense and are intuitive. Theoretical Claims: See the Claims And Evidence section. Experimental Designs Or Analyses: 1. The paper conducted extensive experiments on the proposed OriPID dataset, and compared the transformation matrix with baseline methods, including classification models, self-supervised models, vision-language models, and image copy detection models. 2.
The paper ablates the proposed method using different VAE encoders, loss functions, and matrix ranks, showing that the proposed method is insensitive to the VAE encoder and that an MLP with activation leads to overfitting. Supplementary Material: Yes, includes proofs of lemmas, GPT-4o prompts, more experiment results, and failure cases. Relation To Broader Scientific Literature: 1. The proposed task is related to image copy detection, domain generalization, and text-guided image editing. 2. The paper underscores the issue of manipulated images for malicious or illegal ends, aligning with concerns in generative-model detection. Essential References Not Discussed: No essential references missed. Other Strengths And Weaknesses: **Strengths** 1. The paper offers a large-scale benchmark (OriPID), carefully curated, which should help standardize evaluations in this new domain. 2. The proposed solution is simple yet theoretically motivated and empirically superior to baselines. **Weaknesses** 1. The proposed dataset does not contain the editing strength, and the paper does not show how the proposed method performs with different editing strengths. Other Comments Or Suggestions: See previous sections Questions For Authors: See previous sections Code Of Conduct: Affirmed. Overall Recommendation: 3
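For reference, when each query has exactly one true origin in the gallery (as in this retrieval setup), average precision reduces to the reciprocal rank of that origin, so the two reported metrics can be sketched with toy rankings and illustrative IDs:

```python
def average_precision(ranked_ids, relevant_id):
    """AP with a single relevant item: 1 / rank of the (only) hit."""
    rank = ranked_ids.index(relevant_id) + 1
    return 1.0 / rank

# Toy example: three queries, each with one true origin "a" in the gallery.
rankings = [
    (["a", "b", "c"], "a"),   # hit at rank 1 -> AP = 1.0
    (["b", "a", "c"], "a"),   # hit at rank 2 -> AP = 0.5
    (["b", "c", "a"], "a"),   # hit at rank 3 -> AP = 1/3
]
mAP = sum(average_precision(r, rel) for r, rel in rankings) / len(rankings)
top1 = sum(r[0] == rel for r, rel in rankings) / len(rankings)
assert abs(mAP - (1.0 + 0.5 + 1 / 3) / 3) < 1e-9
assert top1 == 1 / 3
```

This also makes the gap between the two metrics in the tables intuitive: mAP still rewards the origin appearing near (but not at) the top, while top-1 accuracy does not.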
Rebuttal 1: Rebuttal: We sincerely appreciate your efforts in reviewing our paper. We are encouraged that you find our work (1) provides a **large-scale** and **carefully curated** benchmark, (2) proposes a **novel** retrieval-based method, (3) includes **theoretical** arguments, (4) is **intuitive** and **makes sense**, and (5) is **empirically superior** to baselines. We address your questions below and will incorporate all your suggestions into the final version of our paper. **Q1. The proposed dataset does not contain the editing strength, and the paper does not show how the proposed method performs with different editing strengths.** A1. Thank you for the kind reminder. During testing, the editing strengths for Stable Diffusion 2, Stable Diffusion XL, OpenDalle, ColorfulXL, Kandinsky-3, Stable Diffusion 3, and Kolors are 0.9, 0.8, 0.7, 0.7, 0.6, 0.8, and 0.7, respectively. The editing strengths used in testing are manually set to prevent significant visual differences between the generated images and the original ones. During training, the editing strength for Stable Diffusion 2 is 0.9. The table below shows how the proposed method performs under different editing strengths. We observe that although the training and testing images come from ***different diffusion models*** with ***varying editing strengths***, the performance of our method remains consistently **high** across **most** editing strengths. It is important to note that: - `strength = 1` means it's almost like generating from pure noise, which is approximately equivalent to text-to-image generation. Therefore, it is reasonable that we cannot find the origins in that case. - As shown in https://huggingface.co/datasets/ICML2025Rebuttal/ICML2025_Rebuttal/resolve/main/fail_example.pdf, we give some examples of strengths where our method fails. These queries are indeed very visually dissimilar from the origins.
- We do **not** change the training editing strength for Stable Diffusion 2 while varying the test editing strength. That means our method is also generalizable across varying editing strengths. | mAP | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 | |------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----| | Stable Diffusion 2 | 100.0 | 100.0 | 100.0 | 99.9 | 99.9 | 99.8 | 99.2 | 97.5 | 88.8 | 43.2 | | Stable Diffusion XL | 99.9 | 99.9 | 99.9 | 99.8 | 99.5 | 99.1 | 97.9 | 81.5 | 68.1 | 19.3 | | OpenDalle | 100.0 | 100.0 | 99.9 | 99.9 | 99.6 | 98.2 | 87.3 | 49.5 | 13.2 | 1.8 | | ColorfulXL | 100.0 | 99.9 | 99.9 | 99.8 | 99.4 | 97.9 | 89.3 | 61.0 | 19.5 | 2.6 | | Kandinsky-3 | 100.0 | 99.9 | 99.7 | 99.2 | 97.6 | 85.7 | 61.9 | 14.5 | 2.1 | 0.0 | | Stable Diffusion 3 | 100.0 | 100.0 | 99.9 | 99.9 | 99.7 | 99.4 | 98.5 | 85.7 | 30.1 | 0.0 | | Kolors | 99.9 | 99.8 | 99.6 | 98.9 | 97.4 | 92.1 | 90.3 | 70.9 | 24.5 | 2.2 | | Acc | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 | |------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----| | Stable Diffusion 2 | 100.0 | 100.0 | 100.0 | 99.9 | 99.8 | 99.7 | 99.0 | 97.1 | 86.6 | 37.7 | | Stable Diffusion XL | 99.9 | 99.9 | 99.8 | 99.7 | 99.3 | 99.0 | 97.5 | 78.8 | 63.9 | 15.7 | | OpenDalle | 100.0 | 100.0 | 99.9 | 99.8 | 99.4 | 97.8 | 85.3 | 45.4 | 10.7 | 1.2 | | ColorfulXL | 99.9 | 99.9 | 99.8 | 99.7 | 99.2 | 97.5 | 87.7 | 57.1 | 16.5 | 1.8 | | Kandinsky-3 | 100.0 | 99.9 | 99.7 | 99.1 | 97.1 | 83.3 | 57.2 | 11.4 | 1.5 | 0.0 | | Stable Diffusion 3 | 100.0 | 99.9 | 99.9 | 99.8 | 99.6 | 99.2 | 98.1 | 82.9 | 25.6 | 0.0 | | Kolors | 99.9 | 99.8 | 99.5 | 98.6 | 96.9 | 90.8 | 88.8 | 67.4 | 20.5 | 1.5 |
Summary: This paper introduces the ''Origin Identification'' task for text-guided image-to-image diffusion models, aiming to retrieve the original image of a given modified image generated by diffusion models. The motivation for this task stems from security concerns, including misinformation, copyright infringement, and content tracing evasion. Moreover, this paper proposes a novel dataset, containing 100,000 original images with 20 guided prompts per image and 2,000,000 training images, and designs a theoretically guaranteed identification method. Claims And Evidence: Overall, this paper's claims are well-supported by theoretical arguments and experimental evidence. However, some claims could benefit from further clarification or additional empirical analysis: 1. Testing on more diverse diffusion models (e.g., InstructPix2Pix, IP-Adapter) would strengthen this claim. 2. This paper claims that the linear transformation approach is the best way to generalize across models, but it does not compare against other potential transformations (e.g., non-linear embeddings). Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the origin identification task. The OriPID dataset, linear transformation method, and evaluation metrics effectively address the task. Theoretical Claims: The paper makes two major theoretical claims (Existence of a Linear Transformation that Minimizes the Distance Between Original and Generated Image Embeddings, Generalizability of the Linear Transformation Across Diffusion Models), each supported by formal proofs. Experimental Designs Or Analyses: The experimental design is generally well-structured, with appropriate datasets, strong baseline comparisons, and well-justified evaluation metrics. However, some aspects could be improved: 1. 
Introducing hard negative mining (i.e., selecting the most confusing negatives to refine the model) and reporting error analysis on top failure cases would improve the quality of this paper. 2. Add adversarial robustness evaluations or test against image compression and resizing distortions commonly seen in social media uploads. Supplementary Material: I reviewed the supplementary sections, including the proofs, dataset details, additional experiments, and failure case analyses. Overall, the supplementary material is well-structured and informative. Relation To Broader Scientific Literature: This paper's contributions align with several key areas in AIGC and AI security, particularly in image provenance, diffusion models, and content attribution. Essential References Not Discussed: This paper does a good job of covering related literature in image retrieval, diffusion models, and AI-generated content detection. Other Strengths And Weaknesses: This paper presents a well-executed study on origin identification for text-guided diffusion models, with notable strengths in problem formulation, theoretical grounding, and practical evaluation. However, some limitations remain in generalization, robustness, and interpretability: 1. The authors only tested on VAE-based diffusion models; it would be better to discuss the generalizability of the proposed method for future diffusion models, which may use different encoders. 2. The proposed method is only tested on Gaussian blur and JPEG compression, but real-world adversaries may apply more advanced modifications. 3. No evaluation on image cropping, resizing, or watermarking. 4. No insight into why some transformations fail. This paper does not deeply analyze failure cases beyond mentioning hard negatives. Other Comments Or Suggestions: See above. Questions For Authors: Are the proposed datasets and codes publicly available to the academic community? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your efforts in reviewing our paper. We are encouraged that you find: (1) this paper's claims are **well-supported**; (2) the proposed methods and evaluation criteria are **well-aligned** with the origin identification task; (3) the experimental design is generally **well-structured** with **well-justified** evaluation metrics; (4) the supplementary is **well-structured**; and (5) this paper presents a **well-executed** study. We address your questions below and will add these into the final version. **Q1. Testing on more diverse diffusion models (e.g., InstructPix2Pix, IP-Adapter) would strengthen this claim. The authors only tested on VAE-based diffusion models.** A1. Thank you for your kind reminder. As shown in the Appendix (**Table 9** and **Sec. G**), we have evaluated our model’s performance on InstructPix2Pix and IP-Adapter, which use VAE and CLIP to encode original images, respectively. The experiments show that our method successfully generalizes to InstructPix2Pix but fails on IP-Adapter. Based on these experimental results and the theoretical analysis in the main paper, we argue that the **generalization boundary** of our method is ***whether the generated image is conditioned on a VAE-encoded original image***. Notably, this VAE can vary in *architecture* and *parameters*—such as *Stable Diffusion*, *Kolors*, and *Flux*, and the conditioning schemes can also differ—such as those used in *SDEdit* and *InstructPix2Pix* (shown in our paper), or *Prompt-to-Prompt* and *Plug-and-Play* (see experiments in the **Q2** of Reviewer **gjdS**). Although this generalization boundary does not cover all possible scenarios, our method remains **practically valuable** because: (1) most current image-to-image methods indeed utilize a VAE to encode original images; and (2) as shown in Table 9, in fact, no existing methods succeed on IP-Adapter. 
In conclusion, our method is **currently the most effective one with substantially superior performance**. Nevertheless, we acknowledge the importance of overcoming this generalization boundary and consider it as future work.

**Q2. This paper does not compare linear transformation against other transformations.**

A2. Thank you for your insightful question. As shown in **Fig. 9** of the main paper and **Sec. F** of the Appendix, we have discussed the transformations of **multilayer perceptrons (MLPs) with activation functions**. We observe that this case leads to an overfitting problem. Here, we add two alternative architectures: **a single convolutional layer** and **a multi-head attention layer**. The experiments below show that: (1) likely due to underfitting, the convolutional layer results in a performance drop; and (2) although the multi-head attention layer marginally improves performance on seen images, its performance on unseen images falls behind our method, due to overfitting.

|Method|mAP (Seen)|Acc (Seen)|mAP (Unseen)|Acc (Unseen)|
|-|-|-|-|-|
|Embeddings of VAE|51.0|47.0|46.9|43.0|
|Convolution|37.4|33.8|32.5|29.6|
|Attention|89.0|87.2|80.7|78.2|
|Linear Transformation|88.8|86.6|86.6|84.5|

**Q3. Introducing hard negative mining.**

A3. Thank you for your valuable suggestion. Following it, we combine our currently-used CosFace with a hard mining triplet loss. However, as shown in the table below, this approach does not bring a performance improvement. This result is reasonable, as the original CosFace paper suggests that once we aim to train a large margin between classes, additional hard negative mining becomes unnecessary.

|Method|mAP (Seen)|Acc (Seen)|mAP (Unseen)|Acc (Unseen)|
|-|-|-|-|-|
|With|88.4|86.4|86.8|84.9|
|Without|88.8|86.6|86.6|84.5|

**Q4. No evaluation on image cropping, resizing, or watermarking.**

A4. Thank you for your valuable suggestion. Following it, we conduct experiments involving *image cropping*, *resizing*, and *watermarking*.
As shown in https://huggingface.co/datasets/ICML2025Rebuttal/ICML2025_Rebuttal/resolve/main/mod.pdf, our method is **relatively robust** against these modifications.

**Q5. This paper does not deeply analyze failure cases.**

A5. Thanks for this insightful suggestion. We provide an analysis here: our model uses a VAE to compress high-dimensional inputs into a lower-dimensional latent representation. This compression tends to smooth out subtle local details, causing the model to lose critical fine-grained distinctions between similar yet different instances. Furthermore, the imposed prior encourages a uniform distribution in the latent space, forcing nuanced features from distinct instances into overlapping latent representations. These factors reduce the model's capability to differentiate hard negatives from true positives.

**Q6. Are the proposed datasets and codes publicly available?**

A6. Yes! All the proposed datasets (including training, query, and gallery images) and all code (including training, testing, and dataset-generation code) will be made publicly available.

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed responses. Most of my concerns and questions have been addressed. I recommend including the additional experimental results and findings in the final version, as some of them are practical and interesting. I have accordingly raised my rating.

---

Reply to Comment 1.1.1: Comment: We sincerely appreciate the increased rating, and we will make sure to incorporate these experimental results and findings into the camera-ready version.
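As a side note on Q3 above: CosFace (the margin-based loss the authors build on) trains a large inter-class margin by subtracting an additive cosine margin from the target class before scaling. A minimal sketch, assuming typical default values for `s` and `m` rather than the authors' settings:

```python
def cosface_logits(cos_theta, labels, s=30.0, m=0.35):
    # CosFace: subtract the additive cosine margin m from the target-class
    # cosine and scale every logit by s before softmax cross-entropy.
    # s and m here are common defaults, not necessarily the authors' values.
    out = []
    for row, target in zip(cos_theta, labels):
        out.append([s * (c - m) if j == target else s * c
                    for j, c in enumerate(row)])
    return out

# Toy batch: two samples, two identity classes; labels pick the target column.
logits = cosface_logits([[0.9, 0.3], [0.2, 0.8]], [0, 1])
```

Because the margin is enforced on every target class during training, additional hard-negative mining adds little, which matches the rebuttal's observation.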
HyperTree Planning: Enhancing LLM Reasoning via Hierarchical Thinking
Accept (poster)
Summary: This paper proposes a tree-based planning strategy, called **HyperTree Planning (HTP)**, which utilizes a hypertree-structured planning framework. HTP adopts a divide-and-conquer approach, decomposing a complex goal into several sub-goals in a top-down manner. These sub-goals are further decomposed iteratively until they are indivisible. The authors define this process in four stages: selection, expansion, construction, and decision, which together form a hypertree structure for further planning. During the plan generation stage, HTP accesses knowledge bases to refine the plan outline, aiming to generate a comprehensive and detailed plan rather than just an outline. The experiments, covering three fundamental models and three widely-used planning benchmarks, demonstrate the effectiveness of HTP. Additionally, ablation studies further validate the proposed strategies.

Claims And Evidence: Yes, all the claims made in the submission are supported by experiments.

Methods And Evaluation Criteria: The proposed method makes sense, and the evaluation criteria are adopted from the official benchmarks' criteria.

Theoretical Claims: There is no proof for a theoretical claim in this paper.

Experimental Designs Or Analyses: I believe the experimental design is comprehensive and convincing. The authors validate the effectiveness of their proposed methodology across three foundational models and three planning benchmarks, comparing it to seven previous works. All the experiments demonstrate the superiority of their proposed method. Overall, the soundness and validity of the experiments in this paper are strong.

Supplementary Material: I have checked the supplementary material mentioned in the main text.

Relation To Broader Scientific Literature: This paper introduces a novel hypertree-based planning strategy that enhances the performance of LLMs on planning tasks.
Both the comprehensive studies and additional analyses demonstrate the effectiveness of the proposed methodology. Furthermore, the authors show that the proposed HTP yields more significant performance improvements on tasks with longer reasoning chains, which is particularly impressive and strongly validates its promising future potential.

Essential References Not Discussed: The discussion of related work is sufficient, and the experiments cover the necessary baselines.

Other Strengths And Weaknesses: In my opinion, this paper is strong, and I have highlighted the strengths in the above parts. However, I do have some concerns regarding the strategy:
1. **Cost concerns**: As shown in Table 2, the cost for TravelPlanner is nearly 3x higher compared to vanilla GPT-4 and Gemini-1.5-Pro, with the issue becoming even more prominent in PlanBench (\~8x) and Natural Plan (\~14x). While these costs are still lower than those of o1-preview, the large cost disparity on the same foundation model remains a concern. I acknowledge that this is a common issue with all tree-based methodologies because of the huge search space, and I encourage the authors to include more discussion on this matter, as well as potential improvements.
2. **Lack of bad case studies**: Another concern is the apparent lack of bad case studies. There is still a significant gap compared to the saturated performance on the three planning benchmarks. I wonder which stage(s) primarily contribute to the failure cases and why. Are these failures due to the current LLMs' inability to identify potential errors, or is it something else? A detailed analysis of these issues would benefit future work.

By the way, I believe the current version meets the standard for ICML, and I consider this a strong paper. However, it would benefit greatly from addressing the aforementioned concerns.

Other Comments Or Suggestions: Please refer to the weaknesses mentioned in the last part.

Questions For Authors: 1.
In the "Selection" stage, the authors mention that they primarily rely on LLM-based evaluation, both for the selection of hyperchains and leaf nodes. Would it be more effective to use training-based process reward models or other model-based evaluators? While the current strategy is certainly more generalizable, could it be improved by incorporating such methods?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful, valuable, and positive comments. We address the concerns in detail as follows. We sincerely hope that our response could properly address your concerns.

### Weakness 1

> discussion on cost concern and potential improvements.

We will incorporate **a discussion of computational cost** in the revised version of our paper, as detailed below:
- The inference cost of HTP can be higher than conventional backbone models, primarily due to two factors: (1) **the computational cost introduced by multi-path reasoning** (i.e., tree search) and (2) **the hierarchical reasoning mechanism, which decomposes the original task into subproblems, leading to more detailed reasoning**.
- **The impact of these factors varies across tasks**. In TravelPlanner, the additional cost mainly comes from the need for detailed reasoning to satisfy constraints (factor 2). In Natural Plan, where we need to find a unique solution through trial and error, tree search (factor 1) is the primary source of overhead.
- Despite these additional costs, **HTP remains more computationally efficient than MCTS-based methods**. We have added additional experimental results and explanations on this aspect; please refer to **Weaknesses 1 of Reviewer rT6H**.
- We propose two potential improvements: (1) **Using fine-tuned small models instead of large models during the hypertree construction stage**, which could reduce the computational burden; (2) **Predicting task complexity through meta-learning and dynamically adjusting hypertree parameters** (e.g., depth, width) during hypertree construction, allowing for more efficient resource allocation.

### Weakness 2

> Lack of bad case studies for future work

We will incorporate **an analysis of bad cases** in our revised paper, as detailed below:
- **LLMs struggle with complex single-step reasoning**.
In TravelPlanner, given a table of candidate hotels (including hotel name, room type, etc.), the model is required to identify 'entire room' options. However, it often fails by missing correct options or selecting incorrect ones.
- **LLMs lack human prior knowledge**. In TravelPlanner, humans naturally use strategies to stay within budget, such as choosing self-driving over flights and selecting the cheapest hotels and restaurants. However, models struggle to leverage such prior knowledge, resulting in **plans that exceed the budget**.
- **HTP remains vulnerable to long-horizon errors**. In PlanBench, the model must track and update the environment state after each action. However, any mistake in state updating can propagate errors, leading to incorrect subsequent reasoning and actions.
- **HTP lacks the capability for self-reflection and backtracking**. In Natural Plan, the model must determine a feasible multi-city route. While HTP outperforms baselines, it may struggle with constraint violations, such as selecting consecutive cities without a direct connection. Humans typically backtrack and revise their decisions in such cases, whereas HTP lacks an inherent mechanism for such adjustments.

Based on the analysis, we have added **a discussion on future work**; please refer to **Weakness 5 of Reviewer zPJM**.

### Question 1

> In the "Selection" stage, would it be more effective to use training-based process reward models or other model-based evaluators?

We absolutely agree that incorporating a reward-based approach could enhance decision accuracy, though it comes at the cost of some generalizability. To investigate this, **we conduct experiments on the Blocksworld task** from the PlanBench dataset, which is particularly challenging because it requires selecting the correct actions, making the **selection stage highly influential on overall performance**.
We design two types of reward functions, and all methods use GPT-4o as the backbone model:
- **Rule-based rewards**: Rewards explicitly defined based on predefined heuristics, providing structured guidance for action selection.
- **LLM-based rewards**: Rewards generated dynamically by prompting LLMs to formulate heuristic functions, reducing the reliance on manual reward design. The rewards are further refined through an evolutionary algorithm, iteratively improving the effectiveness.

Our results show that **even simple rule-based rewards can significantly improve model performance**. Moreover, the **LLM-generated heuristic rewards offer more fine-grained guidance while reducing the burden of manual reward engineering**. We believe this is a promising direction for future research.

|Method|HTP|+Rule-based|+LLM-based|
|-|:-:|:-:|:-:|
|**Success Rate**|**54.7**|**59.7**|**71.2**|

---

Rebuttal Comment 1.1: Comment: Thanks for your elaboration and complementary experiments. I have no further questions and will maintain my score.

---

Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful review. We deeply appreciate your constructive suggestions, which have significantly contributed to strengthening our work. Your insights on computational cost, failure case analysis, and selection module enhancement are highly valuable. We will ensure that these improvements are incorporated into the revised manuscript.
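A rule-based reward of the kind the rebuttal describes, i.e., predefined heuristics guiding action selection in Blocksworld, could look like the sketch below. The tuple encoding of relations and the one-point-per-satisfied-goal scoring are assumptions for illustration; the rebuttal does not specify the authors' actual heuristics.

```python
def rule_based_reward(state, goal):
    # One point per goal relation already satisfied in the current state.
    # Purely illustrative: the authors' heuristics are not spelled out.
    return sum(1 for relation in goal if relation in state)

# Relations encoded as ("on", x, y) / ("clear", x) tuples (hypothetical encoding).
state = {("on", "A", "B"), ("on", "B", "table"), ("clear", "A")}
goal = {("on", "A", "B"), ("on", "C", "A")}
```

Such a reward can score candidate actions during the selection stage without any training, which is consistent with the rebuttal's point that even simple rule-based rewards help.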
Summary: This paper concentrates on complex planning tasks, for instance, mathematical and logical reasoning. To alleviate the multiple challenges (extended reasoning steps, diverse constraints, and multiple distinct sub-tasks), they propose HyperTree Planning (HTP), which is based on the divide-and-conquer strategy of splitting tasks into multiple distinct sub-tasks. Extensive experiments demonstrate that the proposed method improves accuracy on the TravelPlanner benchmark.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: Yes. The method has comparatively little proof content and raises no potential questions.

Experimental Designs Or Analyses: Yes. They develop multiple experiments and adopt multiple baselines, planning strategies, in-context learning methods, and powerful LLMs to support the effectiveness of the proposed HTP.

Supplementary Material: Additional Results

Relation To Broader Scientific Literature: The earliest works begin with analogical reasoning, including CoT and ToT. Most recently, agent systems collaborate through structured processes to tackle planning tasks. The authors propose that the existing limitations in complex task reasoning include a focus on mathematical and logical reasoning that is ill-suited for planning questions, performance that depends on human-designed prompts, and generalization across tasks hindered by human-designed intervention in autonomous agent methods. They then devise methods that combine ToT and HyperChain to solve these challenges.

Essential References Not Discussed: No

Other Strengths And Weaknesses: **Strengths**
The paper is the first work to incorporate a hypertree structure into the reasoning process. To alleviate the fluctuation of human-designed prompts, they design an autonomous planning framework that self-guides the planning process without manual design. They conduct a number of experiments to support the effectiveness of the proposed methods.
**Weakness**
I am interested in how humans perform on the involved benchmarks: TravelPlanner, PlanBench, and Natural Plan. Do humans make almost no errors across tasks of varying complexity? The authors should provide some failure cases to analyze the limitations of current methods. Though they conduct a number of experiments to investigate the capability of the proposed methods, the analysis mainly describes the results in terms of explicit performance numbers. There is a lack of deep analysis of why these methods improve performance compared to related works. Furthermore, if a large gap remains between existing methods and humans, what does this pattern stem from? The authors should discuss how future works could utilize this work and its potential value in real-world tasks. If the authors provide a reasonable response, I will reconsider my ranking.

Other Comments Or Suggestions: There is a lack of discussion of limitations and future work.

Questions For Authors: See above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful, valuable, and positive comments. We address the concerns in detail as follows. We sincerely hope that our response could properly address your concerns.

### Weakness 1

> Do humans almost never make errors across multiple complex tasks?

In **TravelPlanner**, humans can achieve **near-perfect accuracy**. For PlanBench and Natural Plan, human performance data is unavailable. We estimate that humans can achieve **near-perfect accuracy in PlanBench**, as stacking blocks aligns well with human capabilities. In **Natural Plan**, we expect an overall success rate of **approximately 80%**, with errors occurring mainly on the most difficult tasks.

### Weakness 2 & Weakness 4

> provide some failure cases to analyze the limitations of current methods

We will incorporate **an analysis of bad cases** in our revised paper, as detailed below:
- **LLMs struggle with complex single-step reasoning**. In TravelPlanner, given a table of candidate hotels (including hotel name, room type, etc.), the model is required to identify 'entire room' options. However, it often fails by missing correct options or selecting incorrect ones.
- **LLMs lack human prior knowledge**. In TravelPlanner, humans naturally use strategies to stay within budget, such as choosing self-driving over flights and selecting the cheapest hotels and restaurants. However, models struggle to leverage such prior knowledge, resulting in **plans that exceed the budget**.
- **HTP remains vulnerable to long-horizon errors**. In PlanBench, the model must track and update the environment state after each action. However, any mistake in state updating can propagate errors, leading to incorrect subsequent reasoning and actions.
- **HTP lacks the capability for self-reflection and backtracking**. In Natural Plan, the model must determine a feasible multi-city route.
While HTP outperforms baselines, it may struggle with constraint violations, like selecting consecutive cities without a direct connection. Humans typically backtrack and revise their decisions in such cases, whereas HTP lacks an inherent mechanism for such adjustments.

> If a large gap remains between existing methods and humans, what does this pattern stem from?

A considerable gap remains between HTP and human-level planning, which can be mainly attributed to:
- **Humans are adept at planning and adapting to complex constraints**, such as effortlessly filtering hotels based on multiple conditions.
- **Humans leverage extensive prior knowledge beyond what is provided in the prompt**, which helps them make more informed, practical decisions (e.g., cost-saving strategies in travel planning).
- **Humans are less prone to hallucinations**, making them more reliable for long-horizon planning.
- **Humans can self-reflect and backtrack when necessary**, adjusting their decisions in the face of contradictions.

### Weakness 3

> Why does HTP improve performance compared to related works?

Please refer to **Question 1 of Reviewer rT6H**.

### Weakness 5

> How can future works utilize this work, and what is its potential value in real-world tasks?

We will incorporate **a discussion on future work and potential applications** in our revised paper, as detailed below:

**First, HTP naturally integrates with self-reflection and backtracking techniques, making it particularly valuable for real-world tasks like meeting planning and calendar scheduling**. By replacing single-chain reasoning with a hierarchical hyperchain structure, HTP enhances reflection accuracy by **directly identifying and correcting errors without revisiting unrelated paths**. We conduct **a preliminary experiment** on the TravelPlanner dataset, using GPT-4o as the backbone model.
We use a heuristic oracle to generate reflection signals and employ the simple cosine similarity metric to pinpoint specific erroneous paths, limiting reflections to 3 per plan. **Results show significant improvements with reflection, highlighting HTP's strong potential when integrated with reflection-based methods**. Future work can explore two directions: **(1) replacing the heuristic oracle with an LLM-powered self-reflection mechanism to enhance adaptability, and (2) refining the accuracy of error localization to further improve efficiency**.

||CPR (Micro)|CPR (Macro)|HCPR (Micro)|HCPR (Macro)|SR|
|-|:-:|:-:|:-:|:-:|:-:|
|HTP|87.2|37.8|44.3|32.2|20.0|
|**HTP+reflection**|**93.6**|**66.7**|**60.7**|**48.9**|**44.4**|

**Second, HTP shows great potential in autonomous agent decision-making due to its scalability and adaptability**. To further explore this, we have added **two agent experiments**; please refer to **Question 2 of Reviewer X7uh**. Future work can explore how to **empower end-to-end agents with autonomous hierarchical thinking using HTP**.

**Third, combining HTP with LLM-based heuristic process reward functions is a promising direction**. We have added **an experiment on this aspect**; please refer to **Question 1 of Reviewer uus3**.
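The error-localization step in the preliminary experiment above (matching a reflection signal to the offending path via cosine similarity) can be sketched as follows. Path and signal embeddings are assumed to be precomputed by some sentence encoder; all names are illustrative rather than the authors' code.

```python
import math

def cosine(u, v):
    # Plain cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def locate_erroneous_path(signal_vec, path_vecs):
    # Index of the hyperchain path whose embedding best matches the
    # reflection signal, so only that path is revised on reflection.
    sims = [cosine(signal_vec, p) for p in path_vecs]
    return max(range(len(sims)), key=sims.__getitem__)
```

Restricting the revision to the single best-matching path is what lets the hyperchain structure avoid revisiting unrelated paths during reflection.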
Summary: This paper introduces HyperTree Planning (HTP), a reasoning paradigm designed to improve complex planning tasks using a hierarchical hypertree structure.

Claims And Evidence: The core motivation is that existing reasoning methods (e.g., CoT, ToT) struggle with long-horizon, multi-constraint planning problems, such as travel planning, which require handling interdependent sub-tasks. The proposed framework aims to overcome the limitations of intensive human interventions as in existing planning agents. Key contributions in methodology:
* HyperTree Reasoning Paradigm: it adopts a hypertree structure to model the reasoning process and perform hierarchical thinking. The proposed hypertree construction algorithm aligns well with the structured nature of planning problems.
* Autonomous Planning Framework: leverages task-specific planning outlines to self-guide the planning process dynamically, reducing reliance on manually crafted in-context learning examples.

Methods And Evaluation Criteria: The proposed HTP framework utilizes the divide-and-conquer strategy to construct and refine hypertree-based planning outlines and thus organize sub-tasks in a structured manner. It naturally models complex planning tasks. The iterative refinement strategy is also reasonable.

Theoretical Claims: This is an empirical work; no theory results.

Experimental Designs Or Analyses: Overall, the empirical studies show promising performance improvement on benchmark datasets compared to different baseline planning algorithms. Here are some questions:
1. While a comparison with ReAct is provided in Appendix C.2, why did the authors not compare with ReAct in the main result of Table 1?

Supplementary Material: Yes. Appendices include extra experimental results and comparisons. No code submission.

Relation To Broader Scientific Literature: Authors provide a hypertree-based planning framework for complex planning tasks.
It can be helpful for LLM reasoning and agentic frameworks in practice.

Essential References Not Discussed: Main related work has been included.

Other Strengths And Weaknesses: While hypertree reasoning is conceptually appealing, its computational cost relative to existing planning strategies is unclear. It would be beneficial to include further analyses in terms of efficiency metrics, e.g., inference speed, memory usage, computational complexity, etc.

Other Comments Or Suggestions: Please see above.

Questions For Authors:
1. While I understand that HyperTree Planning differs from Tree of Thought and RAP in that it constructs a hypertree, I cannot fully grasp why the proposed HyperTree Planning excels over existing tree-based methods in principle. Say, is it because the hypertree structure can better handle planning constraints while ToT and RAP cannot, etc.? Can the authors further comment and compare?
2. Methodologically, why can HyperTree Planning work better than ReAct? ReAct is also able to break tasks into subtasks and iteratively make decisions based on evaluation results.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful, valuable, and positive comments. We address the concerns in detail as follows. We sincerely hope that our response could properly address your concerns.

### Weaknesses 1

> It is beneficial to include further analyses in terms of efficiency metrics.

We will incorporate **an analysis of computational cost** in the revised version of our paper, as detailed below. Specifically, we evaluate the computational cost of CoT, RAP, and our HTP on the TravelPlanner dataset. Since open-source models currently exhibit suboptimal performance on TravelPlanner, we adopt GPT-4o as the backbone model. Our evaluation focuses on three commonly used efficiency metrics: **inference speed, token cost, and computational complexity**. Let $n$ denote the number of branches expanded at each step ($n\leq 2$ in TravelPlanner), $l$ the average number of reasoning steps per chain, and $k$ the number of sampling trajectories in MCTS-based methods.

|Model|Inference Speed (s)|Token Cost (in/out)|Computational Complexity|
|:-:|:-:|:-:|:-:|
|CoT|6.92|4328/641|$O(l)$|
|RAP|41.94|5440/3374|$O(nkl)$|
|**HTP**|**25.27**|**5562/963**|$O(nl)$|

The results indicate that **HTP achieves significantly lower computational costs across all three metrics compared to RAP**. This is because our top-down approach eliminates the need for MCTS, reducing the associated overhead. Moreover, **HTP does not introduce additional computational complexity in hierarchical reasoning** within the hyperchain structure. While the hyperchain structure increases the dimensionality of reasoning, it simultaneously shortens the average path length, allowing the overall computational efficiency to remain at the same level. Additionally, we provide **a comparison of API costs in Table 2 of our paper** for reference. Besides the experiments, we provide **additional discussions on computational cost**; please refer to **Weakness 1 of Reviewer uus3**.
### Question 1

> Why does HTP excel over existing tree-based methods in principle?

As illustrated in Figure 2, existing tree-based reasoning methods, such as ToT and RAP, fundamentally select a single reasoning chain from multiple candidate thought chains. **This single-chain structure inherently limits their ability to address complex, long-horizon planning tasks, as it lacks hierarchy and fails to handle multiple subtasks independently.** As a result, these methods often suffer from issues such as **unmet constraints and missing components**.

In contrast, **HTP replaces the single-chain structure with hyperchains, enabling hierarchical reasoning by allowing a single edge to connect multiple nodes**. In travel planning, where about 15 different constraints should be considered, this structure enables the model to clearly identify which subtask needs to be handled along each path, thereby guiding it to **reason about relevant constraints while avoiding unnecessary ones**.

Furthermore, **by explicitly decomposing subtasks across multiple layers, each sub-task is represented as an independent node from the outset**. This ensures that **HTP enforces completeness**, preventing essential sub-goals from being omitted.

### Question 2

> Why did the authors not compare with ReAct in the main result of Table 1?

As mentioned in Appendix C.2, the TravelPlanner dataset consists of a two-stage mode (TS) and a sole-planning mode (SP). The TS mode retrieves target information via tool calls before reasoning (similar to ReAct), whereas the SP mode provides all information upfront, requiring the model to filter relevant details during reasoning. **As Table 1 reports baseline results under the SP mode, ReAct is not included**. To provide a clearer comparison with ReAct, we have conducted **additional experiments evaluating HTP in the TS mode**, and these results will be included in the revised version of our paper.
In HTP’s Self-Guided Planning module, instead of retrieving information from context, we modified it to retrieve information via tool calls. All methods use GPT-4o as the backbone model, and the evaluation metrics remain the same as in Table 1. **The results indicate that HTP in the TS mode is comparable to its performance in the SP mode and significantly outperforms ReAct.**

||CPR (Micro)|CPR (Macro)|HCPR (Micro)|HCPR (Macro)|SR|
|-|:-:|:-:|:-:|:-:|:-:|
|ReAct|79.4|8.33|7.14|5.00|1.67|
|**HTP (TS)**|83.7|**33.9**|39.5|**28.9**|**18.3**|
|**HTP (SP)**|87.2|**37.8**|44.3|**32.2**|**20.0**|

> Why can HyperTree Planning work better than ReAct?

ReAct indeed follows an iterative decision-making process and performs decomposition implicitly through step-by-step reasoning. However, similar to ToT and RAP, **ReAct still follows a single-chain reasoning structure, leading to a lack of hierarchy, unmet constraints, and missing components**. In contrast, HTP overcomes these challenges by leveraging hyperchains to enable structured, hierarchical reasoning, ensuring comprehensive and constraint-aware planning.
Summary: This paper proposes an autonomous planning framework called HyperTree Planning that involves (a) HyperTree Construction, (b) Self-Guided Planning, and (c) Plan Generation. It tackles the limitations of existing chain-of-thought and tree-of-thought approaches on planning problems; for example, they focus on mathematical and logical reasoning. The main contribution comes from its HyperTree planning on top of the existing Tree-of-Thought, with four steps: (1) Selection, (2) Expansion, (3) Construction, and (4) Decision. Through experiments on travel planning benchmarks, it achieves state-of-the-art performance and shows compatibility with multiple backbone LLMs.

Claims And Evidence: HyperTree Reasoning shows superior performance on complex planning benchmarks; this claim could be questionable as to whether it can be applied to other planning/agent tasks.

Methods And Evaluation Criteria: I think HyperTree Planning shows promising performance on trip planning tasks, while it is a concern whether this framework generalizes to other agent tasks such as WebArena.

Theoretical Claims: There is no theory in the paper.

Experimental Designs Or Analyses: Yes, I read the main experiment and ablation designs.

Supplementary Material: Yes. I read every page of the appendix.

Relation To Broader Scientific Literature: This work might be applicable to other agent datasets such as VisualAgent and WebArena.

Essential References Not Discussed: Not a concern.

Other Strengths And Weaknesses: The novelty of the paper seems to be limited, as it is a combination of ToT and the HyperTree structure proposed in (Lample et al., 2022).

Other Comments Or Suggestions: N/A

Questions For Authors:
1. How is the probability pruning implemented? I assume no probability is generated during the hypertree construction stage.
2. Can HyperTree Planning be used in agent tasks? Especially with the function calling capability.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful, valuable, and positive comments. We address the concerns in detail as follows. We sincerely hope that our response could properly address your concerns. If so, we would deeply appreciate it if you could raise your score. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission.

### Weakness 1

> The novelty of the paper seems to be limited as a combination of ToT and the HyperTree structure proposed in (Lample et al., 2022)

The novelty compared to "ToT+HyperTree":
- **Different Motivation**: ToT focuses on exploring the search space through tree search strategies, addressing problems that require extensive trial and error to reach a solution. HyperTree Proof (Lample et al., 2022) is designed for an entirely different domain, aiming to find valid proof pathways by selecting appropriate proof methods. **In contrast, HTP is motivated by the key challenges in complex planning tasks**, such as **a lack of hierarchy, unmet constraints, and missing components**. We take inspiration from human problem-solving strategies and leverage the hypertree structure to structurally enable LLMs to perform hierarchical reasoning. Neither ToT nor HyperTree Proof can effectively address complex planning tasks.
- **Innovative Usage of HyperTree Structure**: In HyperTree Proof, the hypertree structure is primarily used to **select the correct edges**, as each edge represents a different proof method, and the key goal is to complete the proof process by making the right selections at each layer. In contrast, HTP leverages the hypertree structure to **generate nodes**, enabling a divide-and-conquer strategy that decomposes complex planning tasks into hierarchical subtasks. **While both methods utilize hypertree structures, the modeling and usage are fundamentally different**. Therefore, HTP is not a simple adaptation or combination of existing methods.
- **Explainability**: We explain **why HTP is particularly well-suited for solving complex planning tasks**, providing a solid rationale for its superior experimental performance. For more details, please refer to **Question 1 of reviewer rT6H**. - **Novel Planning Framework**: We innovatively model the planning process using the hypertree structure, introducing the first hypertree-based reasoning algorithm and a corresponding fully autonomous planning framework. - **Flexibility and Scalability**: Our HTP framework is highly flexible and can be seamlessly integrated with mechanisms such as **self-reflection** and **process reward modeling**. We have conducted **supplementary experiments** on these mechanisms, as detailed in **Weakness 5 of reviewer zPJM** and **Question 1 of reviewer uus3**. This adaptability is crucial for the future development of **autonomous LLM agents**, highlighting HTP’s strong scalability and generalization capabilities. ### Question 1 >How is the probability pruning implemented? The probabilities in this context refer to **confidence scores output by LLMs**, a technique adopted in several recent LLM-based works like RAP (Hao et al., 2023) and Self-DC (Wang et al., 2024). Specifically, to select hyperchains, we enumerate candidates and present them to the LLM in a dictionary format. **The LLM selects a hyperchain by outputting its index along with the corresponding logit, which we exponentiate to obtain the confidence probability**. Since our hypertree structure involves multi-layer selections, the probabilities for different layers within the same hyperchain follow the multiplication rule, ensuring that the total probability sum across all hyperchains remains normalized. This approach enables us to **select the top-n hyperchains with the highest probabilities efficiently while pruning lower-probability candidates**. ### Question 2 >Can HTP be used in Agent tasks? Especially with the function calling capability. 
We will add **supplementary experiments on agent tasks** in the revised version of our paper, as detailed below. Specifically, we select two widely adopted agent benchmarks, **WebShop and WebArena**, to evaluate HTP’s capability in function calling. As baselines, we choose ReAct, Reflexion, and LATS, which represent SOTA methods without incorporating prior knowledge. To ensure consistency with the baselines, we use GPT-4o and Gemini-1.5-Pro as the backbone models for WebShop and WebArena, respectively. Both datasets are evaluated using **Success Rate (SR)** as the metric. |Method|WebShop|WebArena| |-|:-:|:-:| |ReAct|33.2|17.9| |Reflexion|39.8|20.2| |LATS|41.0|21.0| |**HTP**|**44.2**|**23.4**| **The results show that HTP outperforms SOTA methods on both datasets, demonstrating its strong performance in function calling scenarios**. Additionally, we provide an extended experiment of **HTP’s function calling performance on the TravelPlanner dataset**; please refer to **Question 2 of Reviewer rT6H**.
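The confidence-based pruning described in the answer to Question 1 above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-layer candidate logits and the `top_n` value are hypothetical, and I assume each layer's logits are normalized with a softmax before applying the multiplication rule across layers.

```python
import math
from itertools import product

def select_hyperchains(layer_choices, top_n=2):
    """Enumerate candidate hyperchains from per-layer (index, logit) options,
    convert each layer's logits to probabilities via softmax, multiply the
    per-layer probabilities (multiplication rule), and keep the top-n chains."""
    scored = []
    for chain in product(*layer_choices):
        prob = 1.0
        for layer_idx, (_, logit) in enumerate(chain):
            denom = sum(math.exp(l) for _, l in layer_choices[layer_idx])
            prob *= math.exp(logit) / denom
        scored.append(([idx for idx, _ in chain], prob))
    scored.sort(key=lambda pair: -pair[1])
    return scored[:top_n]  # prune lower-probability hyperchains

# Two selection layers, each with two candidate expansions; logits are made up.
layers = [
    [(0, 2.0), (1, 0.5)],
    [(0, 1.0), (1, 1.5)],
]
kept = select_hyperchains(layers, top_n=2)
```

Because each layer's probabilities sum to one, the probabilities of all enumerated hyperchains also sum to one, matching the normalization property claimed in the rebuttal.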
Towards Understanding Fine-Tuning Mechanisms of LLMs via Circuit Analysis
Accept (poster)
Summary: This paper investigates the mechanisms of fine-tuning in LLMs through circuit analysis, focusing on mathematical tasks where pre-trained models perform poorly but improve significantly after fine-tuning. The authors identify that edge modifications in circuits (rather than node changes) drive performance gains and propose a circuit-aware LoRA method (CircuitLoRA) that dynamically allocates higher ranks to layers with greater edge changes. Experiments demonstrate improved accuracy and parameter efficiency over standard LoRA. Additionally, the paper explores compositional tasks, showing that union circuits of subtasks approximate compositional task circuits. ## update after rebuttal Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes - I'm curious if you combine the discovered circuit for the single task, whether the combined circuit can perform the compositional task like you measure the faithfulness. Supplementary Material: NA Relation To Broader Scientific Literature: - I think the empirical support for modular fine-tuning strategies, relevant to efforts like task arithmetic and model merging. - Compositional reasoning work hypothesizes that models solve complex tasks by combining subtask circuits. 
Essential References Not Discussed: There are some other circuit works related to this paper: Some work about the reuse or combination of circuits is not discussed: - [1] Circuit Component Reuse Across Tasks in Transformer Language Models, ICLR 2024 The work's analysis is mainly based on math tasks, but there are some other knowledge circuits proposed, and whether the compositional analysis is also applicable should be mentioned: - [2] Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization, Neurips 2024 - [3] Knowledge Circuits in Pretrained Transformers, Neurips 2024 Other Strengths And Weaknesses: Strengths: - Insights into compositional tasks could inform strategies for complex task fine-tuning via subtask circuit unions. - Comprehensive experiments across multiple models (Pythia, GPT-Neo, OPT), tasks (arithmetic, sequences, LCM), and fine-tuning methods (LoRA variants, full fine-tuning). Weakness: - Experiments are limited to synthetic mathematical tasks. While these provide controlled settings, it is unclear if findings generalize to natural language tasks (e.g., text generation, reasoning). Other Comments Or Suggestions: NA Questions For Authors: I have some questions about the CircuitLoRA. - Since you need to substitute the critical layers for CircuitLoRA, do you need to tune the model using the two ranks twice with different ranks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your review and helpful suggestions! These are good points, which we answer below. >Q1: I'm curious if you combine the discovered circuit for the single task, whether the combined circuit can perform the compositional task like you measure the faithfulness. Based on your suggestion, we add the faithfulness obtained by the Union Circuit on the task. We observe that the **Union Circuit achieves a faithfulness of 89.18%**, which supports our claim that the Union Circuit can perform the compositional task like the Compositional Circuit. >Q2: There are some other circuits that are related to the work: Some work about the reuse or combinations are not discussed...The work analyzes mainly based on the math tasks, but there are some other knowledge circuits proposed, and whether the compositional is also applicable should be mentioned. We appreciate the opportunity to more clearly situate our contributions within these key works: [1] Circuit Component Reuse Across Tasks in Transformer Language Models (ICLR 2024) While this paper discusses the reuse of components, **we focus on the composability and reuse of circuits**. Our work complements this direction by studying not just reuse but also structural recomposition, showing how sub-circuits from simple arithmetic tasks can be merged to approximate more complex tasks. [2] Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization (NeurIPS 2024) This paper's discussion of composed implicit reasoning has something in common with our thinking. Our study builds on similar math reasoning tasks, but **focuses on how these circuits evolve during fine-tuning and how their compositionality can be leveraged to improve fine-tuning strategies like CircuitLoRA.** [3] Knowledge Circuits in Pretrained Transformers (NeurIPS 2024) Our overlap with this work lies only in circuit discovery. 
Moreover, **our research focuses more on the changes in the internal circuits during fine-tuning as the model accuracy increases, as well as the improvement of the fine-tuning mechanism from the perspective of Mechanistic Interpretability.** We have explicitly cited these works in the Related Work and Discussion sections and expanded our discussion on how our findings support and extend these prior directions, especially regarding modular fine-tuning, compositionality, and general-purpose circuit reuse. >Q3: Experiments are limited to synthetic mathematical tasks. While these provide controlled settings, it is unclear if findings generalize to natural language tasks (e.g., text generation, reasoning). 1, Motivation. We consider our five tasks because many recent works in Mechanistic Interpretability are based on tasks like IOI or greater-than where pre-trained models already achieve high accuracy on these tasks (e.g., 98% for GPT-2 on IOI), which is not practical for understanding fine-tuning. So we choose to explore scenarios where models start with low performance and improve significantly after fine-tuning — allowing us to observe meaningful structural changes in circuits. 2, Following your suggestion, we extended our experiments to a new mathematical task and two natural language tasks. Compared to the tasks of previous studies, our designed tasks are more challenging, involving more complex reasoning patterns. - Comparison Task: Is 121 > 112? Answer: - Complex IOI Task: Robert needed brush, Kelly wanted pan, Daniel handed brush to - Complex Capital-Country Task: If Abkhazia corresponds to Sukhumi, then Moldova corresponds to... The following are the specific experimental results; please see Figure 1 at [this anonymous link](https://anonymous.4open.science/r/rebuttal-icml2025-r4) for additional figures. - Pre-trained model accuracies on these tasks were initially low: 46.74%, 27.60%, and 32.58% respectively. 
- Comparison: $ \Delta S_{edge} $ = 23.6%, $ \Delta S_{node} $ = 9.6% - Complex IOI: $ \Delta S_{edge} $ = 17.3%, $ \Delta S_{node} $ = 6.0% - Capital-Country: $ \Delta S_{edge} $ = 16.8%, $ \Delta S_{node} $ = 7.3% These results replicate the core conclusion of our main paper that edge dynamics dominate structural change during fine-tuning and confirm that our findings generalize beyond the original tasks. >Q4: I have some questions about the CircuitLoRA. Since you need to substitute the critical layers for CircuitLoRA, do you need to tune the model using the two ranks twice with different ranks? To clarify, CircuitLoRA requires only **a single round** of fine-tuning. The training is performed once, with a unified LoRA configuration where Critical layers (identified via circuit analysis) are assigned a higher rank ($r_c$), Non-critical layers are assigned a lower rank ($r_o$). This design is one of the key strengths of CircuitLoRA — it leverages mechanistic insights to redistribute parameter budget effectively, without introducing additional training complexity or computational overhead. --- Rebuttal Comment 1.1: Comment: Dear authors, The two tuning phases I mentioned are: when you detect the critical layers, you need to tune the model first to decide which layers are critical, and then you conduct CircuitLoRA by setting different ranks. I hope I understand correctly? Thanks for your reply. Best --- Reply to Comment 1.1.1: Comment: Thank you for your reply! — yes, your understanding is correct. CircuitLoRA is a two-phase tuning strategy. **The motivation for this design is to further verify the conclusions obtained in Section 4.** To be more practical, we conduct experiments to illustrate: - Phase 1: In the first stage of identifying critical layers, we find that using LoRA with `rank=2` is sufficient, which uses significantly fewer parameters than the base LoRA setup `rank=32`. This highlights the lightweight nature of our approach. 
Besides, in a 4-epoch training setup, critical layers identified after just `1 epoch` were already consistent with those from the final model, indicating that full fine-tuning is not required to extract critical layers. - Phase 2: Full CircuitLoRA is then applied — we set higher ranks on the critical layers identified in phase 1, while keeping the ranks low on non-critical layers. We hope our response can address your concern!
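The two-phase rank assignment discussed in this exchange can be sketched as follows. This is a minimal plain-Python illustration, not the authors' implementation: the per-layer edge-change counts, `top_k`, and the rank values are hypothetical placeholders.

```python
def assign_lora_ranks(edge_changes, top_k, r_critical, r_other):
    """Map phase-1 edge-change counts to phase-2 LoRA ranks: the top_k layers
    with the most changed circuit edges get the higher rank (critical layers),
    every other layer gets the lower rank."""
    critical = sorted(edge_changes, key=edge_changes.get, reverse=True)[:top_k]
    return {layer: (r_critical if layer in critical else r_other)
            for layer in edge_changes}

# Hypothetical per-layer edge-change counts from a cheap rank-2 probe run
# (phase 1), mapped to the phase-2 rank configuration.
changes = {0: 3, 1: 12, 2: 40, 3: 7, 4: 25, 5: 2}
ranks = assign_lora_ranks(changes, top_k=2, r_critical=32, r_other=8)
```

The resulting per-layer rank dictionary would then parameterize a single round of LoRA fine-tuning with non-uniform ranks, as the rebuttal describes.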
Summary: * The paper investigates circuits in LLMs (subsets of the computational graph) that have been finetuned to complete various small mathematical tasks (e.g. add two numbers). * The paper computes circuits (using standard methods) at different stages in the finetuning process and on different data. After verifying that the circuits are faithful, the paper finds that there is less change in circuit structure as the finetuning process continues, that found circuits are generally robust to data perturbation, and that edges in a circuit change more than nodes in a circuit. * The paper then introduces a novel approach to performing parameter efficient finetuning distillation, given an already-finetuned model. The idea is to compute circuits for the task in the pre-finetuned and post-finetuned models, determine which layers contain the greatest differences in circuits (critical layers), and then perform parameter efficient finetuning with more parameters on the critical layers and fewer parameters on other layers. The paper finds that this approach (called "CircuitLoRA") outperforms standard LoRA with the same parameter ratio, and even often outperforms LoRA with greater parameter count on many tasks. * Finally, the paper looks at a compositional task (which requires two subtasks to be solved) and finds more overlap between the union of the subtask circuits and the compositional task circuit than between circuits for unrelated tasks. ## update after rebuttal I raised my recommendation to a weak-accept. This is largely because of new results provided by the authors that demonstrate that CircuitLoRA outperforms other layer-adaptive LoRA methods, and due to responses to other reviewers that explain that CircuitLoRA does not require the model to be fully finetuned before it is applicable (in fact, the authors stated that only one epoch out of five epochs is necessary). 
This suggests that CircuitLoRA may be valuable in allowing for greater parameter efficiency in finetuning. Because I now believe that CircuitLoRA is a worthwhile contribution, I raised my recommendation. Claims And Evidence: * In Section 4.1, the paper states "Key Observation 1: Circuits can be identified in both pre-trained and fine-tuned models with high faithfulness and robustness, regardless of their significant performance differences." This is supported by Figure 2, which shows faithfulness levels above 80% for obtained circuits (which increase with the number of finetuning checkpoints), and circuits with high robustness scores of over 0.9. * In Section 4.2, the paper claims that circuits stabilize over the course of finetuning; this is supported by Figure 3, which shows that the number of node changes and edge changes in circuits across tasks decreases as finetuning progresses. * At the end of Section 4.2, the paper mentions "the pivotal role of edges as the primary drivers of structural adaptation during fine-tuning". This is supported by the paper finding that when normalizing by the number of edges/nodes in a circuit before finetuning, a greater proportion of edges in a circuit change over the course of finetuning than nodes (Fig. 3). While this evidence itself may be true, it is unclear how this implies that edges **drive** structural adaptation during fine-tuning, or what it would mean for edges to do such a thing. Similarly, the paper states based on this evidence "Key Observation 2: Fine-tuning performs more significant edge modifications than node modifications." But to me, the greater number of edge changes does not follow that these edge changes are **more significant** than node changes. In fact, because there are far more possible edges in a circuit than nodes, it seems reasonable to believe that a node change is more significant than an edge change. 
* In Section 4.3, the paper states that "added nodes are predominantly located in the middle and later layers of the circuit, whereas added and deleted edges are concentrated in the middle layers". The paper supports this with a diagram of edges and nodes that were added to/removed from the original addition-subtraction circuit over the course of finetuning (Fig. 3, left). Visually, looking at the figure, this seems true, but especially in the case of edge modifications (because there are so many edges), it feels difficult to be sure. I would recommend that the paper explicitly include a graph that plots the number of edge/node modifications per layer. * In Section 5, the paper states that "Circuits can in turn improve fine-tuning with higher accuracy and parameter efficiency across various mathematical tasks." This is well-supported: according to Table 1, the paper's novel "CircuitLoRA" method, which makes use of circuit change information over finetuning, outperforms PEFT methods with even higher parameter ratios (although see Question 2 of mine regarding some confusion I have about these parameter ratios). * In Section 6.2, the paper states that "the Union Circuit [the union of the circuits for the two subtasks in a compositional task] provides an approximate representation of the Compositional Circuit [the circuit for the compositional task]". Similarly, it then states that "The composition of the circuits can effectively represent the circuits of the compositional task". However, the primary evidence for this comes from Table 2, which shows the overlap between the top $k$ edges in the union circuit and the top $k$ edges in the compositional circuit for different values of $k$, compared to the overlaps between different circuits (as baselines). 
While indeed the union and compositional circuits have greater overlaps (for $k=100$, their overlap is 69 edges versus 51 edges for the Add/Sub and Mul/Div circuits), no information is given about the performance/faithfulness of these circuits. It is thus hard for me to say that these claims are truly supported. In order for the claim to be supported, the paper should include this faithfulness information (see Question #3 later in this review). Methods And Evaluation Criteria: * The paper uses EAP-IG for extracting faithful circuits, which is a standard, well-performing method. The faithfulness metric used in this paper is also standard and sensible. * I am a bit confused about the paper's newly-defined "robustness" metric. This metric is defined as the Jaccard similarity (intersection-over-union) of the edge set of the original circuit and the "perturbed circuit", where the "perturbed circuit" is computed by first adding noise to the dataset and then extracting the circuit for the same task on this "perturbed dataset". What doesn't make sense to me is why this notion is considered in terms of dataset noise, when it would be more principled to consider it in terms of extracting circuits from different disjoint *dataset splits*. All of the "noising" operations described in the paper actually seem to just be creating different dataset examples. Would it not make more sense to simply partition the dataset into $n$ disjoint splits, extract a circuit on each split, and then calculate the pairwise Jaccard similarities between these circuits? * The tasks that the paper investigates (addition/subtraction, multiplication/division, arithmetic/geometric sequences, least common multiples, linear function evaluation) all seem reasonable. The two-step compositional task introduced in Section 6.1 also makes sense. * The paper's newly-introduced CircuitLoRA algorithm is simple, sensible, and in keeping with the circuit-oriented focus of the paper. 
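The split-based robustness variant suggested in the second bullet above could be computed along these lines. This is a sketch with toy edge sets: the circuits and node names are invented for illustration, not taken from the paper.

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity (intersection-over-union) of two edge sets."""
    return len(a & b) / len(a | b)

def split_robustness(circuits):
    """Mean pairwise Jaccard similarity over circuits extracted from
    n disjoint dataset splits, as proposed in the review."""
    pairs = list(combinations(circuits, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Toy edge sets standing in for circuits found on three disjoint splits;
# each edge is a (source component, destination component) pair.
c1 = {("a0.h1", "a2.h3"), ("a2.h3", "mlp4"), ("mlp4", "logits")}
c2 = {("a0.h1", "a2.h3"), ("a2.h3", "mlp4"), ("mlp5", "logits")}
c3 = {("a0.h1", "a2.h3"), ("mlp4", "logits"), ("mlp5", "logits")}
score = split_robustness([c1, c2, c3])
```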
One possible "nice-to-have" would be to compare CircuitLoRA with another non-uniform-rank PEFT method (e.g. AdaLora, which the authors of this paper cited), to see if circuit-specific information outperforms more generalist algorithms. Even if not, it is still a "sign of life" for circuit analysis, suggesting that it is able to pick up on some real important properties of the model. Theoretical Claims: No theoretical claims were made. Experimental Designs Or Analyses: I already discussed the compositional overlap experiment from Section 6.2 in the "Claims and Evidence" section of this review. I also looked into whether learning rate was tuned separately for CircuitLoRA compared to vanilla LoRA, and happily found that it was not, thus putting the two methods on a more even playing field. Beyond this, I did not particularly investigate the validity of experiments in detail, although they all seem straightforward enough to me, and the authors mention that they split all datasets into a finetuning split and a separate circuit analysis split. Supplementary Material: I read Appendix A to see how hyperparameters were chosen for PEFT (and happily found that LoRA learning rate was chosen based on rank, rather than being tuned separately per layer or per method). I also read Appendix C to learn what the dataset noise procedure consists of. Relation To Broader Scientific Literature: This paper does a mostly good job in its related works section (Section 2.2 in particular) of contextualizing itself in terms of both the circuit analysis literature and also the literature on finetuning methods (such as PEFT methods). However, the related works section does not include [1], a paper from last year which addresses many of the same questions as this one on how circuits form throughout the training process of a large language model. 
I provide more specific discussion on [1] in the "Essential References Not Discussed" portion of this review; however, suffice it to say that I believe that, in the context of [1], the paper under review lacks novelty in its approach and subject of analysis. [1] Tigges, C., Hanna, M., Yu, Q., and Biderman, S. LLM Circuit Analyses Are Consistent Across Training and Scale. arXiv preprint arXiv:2407.10827, 2024. Essential References Not Discussed: Possibly the greatest lacuna from the references section of this paper is [1]. Just like this paper, [1] applies EAP-IG to find circuits in models at various stages of training; it then calculates the stability of these circuits throughout training using a Jaccard similarity-based score, and finds that circuits often stabilize throughout training. This anticipates many of the claims made in the paper under review. Furthermore, [1] goes beyond merely looking at graph-level properties of the circuits under consideration, and instead analyzes the specific functional roles of different components in specific well-studied circuits, along with how they evolve over time. [1] Tigges, C., Hanna, M., Yu, Q., and Biderman, S. LLM Circuit Analyses Are Consistent Across Training and Scale. arXiv preprint arXiv:2407.10827, 2024. Other Strengths And Weaknesses: This paper is well-written, and most analyses are done in sensible way. The main reason why my recommendation for this paper is a weak-reject is that its analysis of circuit development and stability over finetuning largely is a retread of [1]'s more thorough analysis of circuit development and stability over the course of training. And while this current paper does introduce the CircuitLoRA PEFT method, I am somewhat skeptical of the utility of this method, given that it requires a model to be finetuned in the first place in order to compute circuits, and given the lack of comparison in this paper between this method and other adaptive PEFT methods. 
I hope that in their rebuttal, the authors of this paper will provide a persuasive explanation of how their paper differs from previous literature. If I am convinced by such an explanation, then I would be happy to raise my score. [1] Tigges, C., Hanna, M., Yu, Q., and Biderman, S. LLM Circuit Analyses Are Consistent Across Training and Scale. arXiv preprint arXiv:2407.10827, 2024. Other Comments Or Suggestions: **Minor questions** 1. In Table 1, the entire line for each of the CircuitLoRA results is bolded, suggesting that each CircuitLoRA outperforms all other methods. However, this is not true across all tasks (e.g. $r_o = 32$ LoRA beats $r_o = 8, r_c = 32$ CircuitLoRA on the Sequence task). Does the bolding represent "best performance for a given parameter ratio"? Some clarification would be helpful, especially for readers simply skimming the tables and figures. 2. For CircuitLoRA, were the critical layers found for different tasks the same? How much overlap was there? It might be the case that there are certain circuit-independent critical layers that benefit an outsized amount from LoRA finetuning. If this is true, then this would suggest that when performing LoRA finetuning in general, those layers' adapters should have higher ranks. 3. In Algorithm 1, what is EnhancedLoRALinear? I assume that this is the same thing as a LoRALinear adapter but with a higher rank; is this true? If so, then I would recommend replacing "EnhancedLoRALinear" with "LoRALinear". **Suggestions** * It would make the figures much easier to parse and cite if they were broken up into subfigures. **Typos** * In Table 2, "Mul_Div" is written instead of "Mul/Div". Questions For Authors: 1. When perturbing the dataset to calculate robustness, if a perturbed prompt already exists in the dataset, then is it resampled? If not, then this would suggest that the actual level of perturbation is less than stated. 2. 
In Table 1, how do CircuitLoRA and RandomLoRA with $r_o = 32, r_c = 64$ have a lower parameter ratio (1.4248%) than the LoRA baseline with $r_o = 32$ (1.7479%)? Is this a typo, or am I not understanding something about how the CircuitLoRA algorithm works? 3. For Section 6, what is the faithfulness score of the union circuit on the compositional task? How does this change for different values of $k$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your helpful review! >Q1: The authors of this paper will provide a persuasive explanation…, then I would be happy to raise my score. We apologize for the oversight and have added the missing reference. Both works use EAP-IG as a shared tool, not a core contribution. We clarify key differences from [1] in motivation and contributions. Motivational Distinction: - [1] focuses on analyzing how circuits and their components emerge and stabilize during pretraining. - We focus on why fine-tuning improves performance, including per-layer node/edge dynamics—a finer view not explored in [1]. Unlike prior MI work, we focus on low-performing tasks to better reflect practical fine-tuning scenarios. Application Contributions Beyond [1]: - As noted in *Open Problems in Mechanistic Interpretability* (2024), MI research splits into understanding mechanisms and using MI for better predictions. Most prior work, including [1], contributes to the first category. Beyond understanding, we leverage MI insights to improve fine-tuning. CircuitLoRA shows the practical value of structural insights. - We also propose and empirically evaluate the Compositional and Union Circuits. We show that merging two subtask circuits can approximate the compositional circuit. These are hard to observe in [1], which focuses on pre-training. >Q2: But to me, the greater number of edge changes does not follow that these edge changes are more significant than node changes…because there are far more possible edges in a circuit than nodes… To account for this, we use a **normalized change metric $ \Delta S$**, measuring node/edge change rates relative to their initial values. The results show edge change rates consistently exceed node change rates by **2–3x**. 
To distinguish natural structural growth from fine-tuning effects, we added an experiment measuring **the difference between the estimated edge changes caused by node changes and the actual observed edge changes.** In all tasks, the actual edge changes substantially exceeded the upper bound implied by node changes — often by a factor of **2.6x to 3.1x**. This supports our conclusion. >Q3: In Table 1, how do CircuitLoRA and RandomLoRA with $r_o$=32,$r_c$=64 have a lower parameter … Does the bolding represent "best performance for a given parameter ratio"? First, there is a labeling error in Table 1. CircuitLoRA ($r_o$=32, $r_c$=64) should be CircuitLoRA ($r_o$=16, $r_c$=64). Second, bolding highlights the best method in similar parameter ranges. In the Table 1 caption, we have already specified the intended comparisons: CircuitLoRA ($r_o$=8, $r_c$=32) vs LoRA ($r_o$=16). >Q4: For Section 6, what is the faithfulness score of the union circuit on the compositional task? How does this change for different values of k? 1, We evaluated the faithfulness of the Union Circuit and found it to be 89.18%, compared to 96.86% for the Compositional Circuit. 2, We further report the faithfulness of the union circuit across different percentages of total edges. As overlap is structural, we focus on the top 100–1000 scoring edges. Faithfulness evaluation requires more edges. See **table1** at [this anonymous link](https://anonymous.4open.science/r/rebuttal-icml2025-r3). The results show that a small subset of top-ranked edges is sufficient to achieve high faithfulness. >Q5: If a perturbed prompt already exists in the dataset, then is it resampled?...Would it not make more sense to simply partition the dataset into n disjoint splits...? 1, We applied a duplicate avoidance mechanism during generation (up to 100 attempts per task). 2, We also ran an experiment as suggested. The results show that circuits in the **fine-tuned model score 0.73 vs. 
0.84 (pre-trained model) and 0.55 (random model).** These results further confirm our original conclusion. >Q6: One possible "nice-to-have" would be to compare CircuitLoRA with another non-uniform-rank PEFT method. We conducted experiments comparing CircuitLoRA with AdaLoRA. Below are results for the two methods with similar parameters. |Method|Param Ratio|Add/Sub(300)|Mul/Div| |-|-|-|-| |AdaLoRA|1.7481%|76.70|92.75| |CircuitLoRA ($r_o$=16, $r_c$=64)|1.4248%|83.10|97.00| Please see **table2** at the above anonymous link for full results. This provides empirical support that MI insights can guide parameter-efficient fine-tuning effectively. >Q7: For CircuitLoRA, were the critical layers found for different tasks the same? How much overlap was there? In our current experiments, we analyzed the top-5 critical layers identified for each task. Please see **table3** at the above anonymous link. Critical layers vary by task, with some overlap. >Q8: I would recommend that the paper explicitly include a graph that plots the number of edge/node modifications per layer. We have added it. Please see Figure 1 at the above anonymous link. >Q9: I assume that this is the same thing as a LoRALinear adapter but with a higher rank; is this true? Correct—we've adjusted accordingly. --- Rebuttal Comment 1.1: Comment: Thank you for taking the time to respond to my review. I think that the new results presented in your rebuttal, along with rebuttals to other reviewers, do strengthen the paper. As such, I will be changing my recommendation to a weak-accept. The main information that caused me to increase my score was your explanation in a reply to Reviewer Vy58 that CircuitLoRA only requires a single epoch of finetuning to identify critical layers, suggesting that the method does have immediately practical benefits, especially in light of the table that you provided comparing CircuitLoRA to AdaLoRA. 
I think that focusing on these practical benefits would improve the framing of the paper -- especially if the paper were to include calculations/experiments that compare total compute required versus performance for both CircuitLoRA + 1 epoch finetuning and AdaLoRA. With regard to the framing, I am still somewhat skeptical of how the poor-performance-task finetuning setting considered in this paper is qualitatively different from pretraining or finetuning on high-performance tasks, both of which settings have been considered in the previous literature. Hence why I think that a greater focus on the practical benefits of CircuitLoRA would be helpful. Also, one minor question: > The difference between the estimated edge changes caused by node changes and the actual observed edge changes. In all tasks, the actual edge changes substantially exceeded the upper bound implied by node changes — often by a factor of 2.6x to 3.1x How are estimated edge changes computed? And is this a lower bound (not an upper bound) or something else (e.g. expected value)? --- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful feedback, for recognizing our contribution, and for raising your score. We sincerely appreciate your engagement and support for our work! The estimation is intended as an upper bound on the number of edge changes attributable to node changes. We estimate edge changes by multiplying the average number of nodes changed ($ΔN$) by the average number of edges per node ($D$) in the circuit. $$ \text{Estimated Edge Changes} = \Delta N \times D $$ The estimate assumes that each changed node affects all of its connected edges, which gives the maximum number of edge changes directly attributable to node changes. In practice, some connections are preserved by rerouting to other nodes, so the actual number of such edge changes is smaller. 
Since the average degree ($D$) of a node is typically greater than the number of edges that actually change per node, the overall estimate is necessarily larger.
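The upper-bound calculation described above can be sketched numerically. All counts in the snippet below are hypothetical placeholders for illustration, not figures from the paper:

```python
# Sketch of the upper bound described above: if every changed node affected
# all of its incident edges, edge changes attributable to node changes would
# be at most delta_N * D. All numbers below are hypothetical illustrations.

def estimated_edge_changes(delta_nodes: int, avg_degree: float) -> float:
    """Maximum edge changes directly attributable to node changes."""
    return delta_nodes * avg_degree

delta_N = 40          # hypothetical: average number of nodes changed
D = 6.5               # hypothetical: average edges per node in the circuit
observed_edges = 720  # hypothetical: edge changes actually observed

bound = estimated_edge_changes(delta_N, D)
print(f"upper bound: {bound:.0f}")
print(f"observed / bound: {observed_edges / bound:.2f}x")
```

An observed/bound ratio above 1x is what the rebuttal uses to argue that edges are rewired beyond what node churn alone could explain.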
Summary: The paper studies how fine-tuning works in LLMs using the circuit analysis method. It presents a set of mathematical tasks that show clear performance improvements during fine-tuning, unlike previous work that focused on already well-performing pre-trained models. The authors find that fine-tuning mainly changes the connections (edges) in the model while keeping most of the internal similarities (nodes), which goes against the idea that fine-tuning only adds new components. Based on this, they introduce a circuit-aware Low-Rank Adaptation (LoRA) algorithm that ranks circuit layers by how much their connections change, resulting in an improvement in performance compared to standard methods.

Claims And Evidence:

Claim1: Circuits can be identified in both pre-trained and fine-tuned models with high faithfulness and robustness, regardless of their significant performance differences. -- Yes. The authors employ the pythia-1.4B model in section 4.1 to do the analysis.

Claim2: Key Observation 1: Circuits can be identified in both pre-trained and fine-tuned models with high faithfulness and robustness, regardless of their significant performance differences. -- On different checkpoints the authors find that while node similarities remain high, there are significant edge changes that differentiate pre-trained and fine-tuned models. This indicates that circuit dynamics play a crucial role in the fine-tuning process. In Figure 3 (upper right), the authors show the plot. Actually, I cannot agree with it: the total number of edges is much greater than the total number of nodes, so the ratio of change can deliver more information.

Claim3: The development of a circuit-aware LoRA method optimizes fine-tuning. Evidence: The paper describes a novel LoRA method that prioritizes circuit layers based on edge modifications. Experimental results validate this approach, indicating that circuit insights can lead to improvements in fine-tuning effectiveness.
Methods And Evaluation Criteria: In general, the proposed methods and evaluations make sense for the problem.

Theoretical Claims: N/A

Experimental Designs Or Analyses: Overall, the experimental design is sound. However, in Section 4 the authors use LoRA with the Pythia-1.4B model for fine-tuning, which I believe is not good. Typically, full-parameter fine-tuning is preferred, especially since the authors use a small 1.4B model here. If a very large model were used, I could understand that, due to computational constraints, the authors might need to use PEFT directly. But for a 1.4B model, GPU memory constraints should not be a barrier to using full fine-tuning. This choice raises concerns about whether the subsequent findings would hold true under normal full fine-tuning conditions.

Supplementary Material: Yes. Appendix A and Figures 7 and 8.

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: Weaknesses:
1. The study focuses on a specific set of mathematical tasks, and further research is needed to determine the generalizability of these findings.
2. For the empirical observation, consider using full fine-tuning (FT) for small models and PEFT for large models.
3. If you only want to focus on mathematical problem solving for reasoning, I suggest extending the circuit analysis from LLM + SFT to LLM + RL to enhance the contribution.

Other Comments Or Suggestions:
- Broaden the range of mathematical tasks to assess the generalizability of the findings.
- For empirical observations, use full fine-tuning for small models and PEFT for large models.
- If the focus is on mathematical problem solving and reasoning, consider extending the circuit analysis from LLM + SFT to LLM + RL to enhance the contribution.

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thanks for your review and helpful suggestions!

>Q1: This indicates that circuit dynamics play a crucial role… Actually, I cannot agree with it. The total number of edges is much greater than the total number of nodes. The ratio of change can deliver more information.

To account for this, we use a **normalized change metric $\Delta S$**, which measures the change rate of nodes and edges relative to their initial quantities. This metric is introduced in Section 4 and visualized in Figure 3 (bottom right). Using this metric, we observe across all tasks that edge change rates consistently exceed node change rates, by a factor of **2–3x**. To further distinguish whether it is the natural expansion of the structure or the influence of the fine-tuning mechanism, we supplement experiments on our tasks: **the difference between the estimated edge changes caused by node changes and the actual observed edge changes.** In all tasks, the actual edge changes substantially exceeded the upper bound implied by node changes — often by a factor of **2.6x to 3.1x**. This further confirms our previous experimental conclusions.

>Q2: However, in Section 4 the authors use LoRA with the Pythia-1.4B model for fine-tuning, which I believe is not good. … Consider using full fine-tuning (FT) for small models and PEFT for large models.

We have conducted both LoRA and full-parameter fine-tuning (FT) experiments with the Pythia-1.4B model for comparison. The results are provided in Appendix G; we mention this in the last paragraph of Section 4.3. **The reason we chose to present LoRA-based results in the main text (Section 4) is the logical structure of the paper:** in Section 5, we introduce CircuitLoRA. Since CircuitLoRA is a response to the LoRA-based insights from Section 4, presenting LoRA results earlier improves narrative consistency — i.e., we derive insights under LoRA, then use them to improve LoRA.
>Q3: The study focuses on a specific set of mathematical tasks,...Broaden the range of mathematical tasks to assess the generalizability of the findings.

1. Motivation. We consider these tasks because many recent works in Mechanistic Interpretability are based on tasks like IOI or greater-than, where pre-trained models already achieve high accuracy (e.g., 98% for GPT-2 on IOI), which is not practical for understanding fine-tuning. So we choose to explore scenarios where models start with low performance and improve significantly after fine-tuning — allowing us to observe meaningful structural changes in circuits.

2. To follow your suggestion, we extended our experiments to both new mathematical tasks and two natural language tasks. Compared to the original versions of these tasks, our designed tasks are more challenging, involving more complex reasoning patterns.

- Comparison Task: Is 121 > 112? Answer:
- Complex IOI Task: Robert needed brush, Kelly wanted pan, Daniel handed brush to
- Complex Capital-Country Task: If Abkhazia corresponds to Sukhumi, then Moldova corresponds to...

The following are the specific experimental results; please see Figure1 at [this anonymous link](https://anonymous.4open.science/r/rebuttal-icml2025-r2) for additional figures.

- Pre-trained model accuracies on these tasks were initially low: 46.74%, 27.60%, and 32.58%, respectively.
- Comparison: $\Delta S_{edge}$ = 23.6%, $\Delta S_{node}$ = 9.6%
- Complex IOI: $\Delta S_{edge}$ = 17.3%, $\Delta S_{node}$ = 6.0%
- Capital-Country: $\Delta S_{edge}$ = 16.8%, $\Delta S_{node}$ = 7.3%

These results replicate the core conclusion of our main paper that edge dynamics dominate structural change during fine-tuning, and confirm that our findings generalize beyond the original tasks.

>Q4: If you only want to focus on mathematical problem solving for reasoning, I suggest extending the circuit analysis from LLM + SFT to LLM + RL to enhance the contribution.
1. In this work, we focus on SFT primarily because it is more suitable for our task and model size. For models below 30B, reinforcement learning tends to perform poorly and needs to be applied to more difficult tasks to be meaningful.

2. In the previous question, we expanded our tasks beyond mathematical problems to include some natural language tasks. Further, following your suggestion, we also explored the circuits before and after reinforcement learning. In the Add/Sub task, we used PPO for 10 epochs of training and compared the model's internal circuits before and after reinforcement learning.

3. The experimental results show $\Delta S_{edge}$ = 30.3% and $\Delta S_{node}$ = 14.7%. Besides, the added nodes are predominantly located in the middle and later layers of the circuit, and the nodes in the shallow layers rarely change. Please see Figure2 at the same anonymous link above for an additional figure. **This is basically consistent with the conclusion in our original paper, and this result strengthens the generality of our research conclusions.**

---

Rebuttal Comment 1.1:

Comment: Most of my concerns have been addressed. I raised my score.
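For readers following the exchange on the normalized change metric $\Delta S$ in Q1, it can be sketched as follows. The exact per-checkpoint definition (unsigned symmetric set differences summed over checkpoints and normalized by the initial set size) is an assumption based on the descriptions in this thread, and the toy circuits are illustrative only:

```python
# Hedged sketch of a cumulative normalized change metric in the spirit of the
# Delta_S discussed above: unsigned per-checkpoint set changes summed over
# fine-tuning, normalized by the initial set size. The paper's exact
# definition may differ.

def delta_S(checkpoints):
    """Cumulative change rate of a node or edge set across checkpoints."""
    initial = checkpoints[0]
    total = sum(
        len(a.symmetric_difference(b))
        for a, b in zip(checkpoints, checkpoints[1:])
    )
    return total / len(initial)

# Toy example: edges churn more than nodes across the same three checkpoints.
edges = [{1, 2, 3, 4}, {1, 2, 5, 6}, {1, 5, 6, 7}]
nodes = [{1, 2, 3, 4}, {1, 2, 3, 5}, {1, 2, 3, 5}]
print(delta_S(edges))  # 6/4 = 1.5
print(delta_S(nodes))  # 2/4 = 0.5
```

Because per-checkpoint changes are unsigned, elements that are added and later removed still count, which matches the rebuttal's stated intent of capturing transient intermediate behavior.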
Summary: The paper studies the dynamics of fine-tuning an LLM on mathematical tasks that the model initially can't perform. This is studied through the lens of circuits identified in an automated manner using edge attributions derived via integrated gradients (where the gradients are presumably derived from the ground-truth task labels). Specifically, a circuit for the task is computed in this way throughout the fine-tuning process, and these circuits are compared. The main findings of interest are:
- the fine-tuning mostly re-uses early nodes in the circuit and adds new nodes in later layers
- LoRA can be improved by concentrating more parameters in layers that see more change with fine-tuning
- using the union of circuits for 2 tasks can be helpful as an approximation for the circuit of a task that composes these tasks.

## Update after rebuttal
Reading the rebuttal has mildly updated me upwards on the merits of the paper.

Claims And Evidence:
- A problematic main finding advertised by the paper is: "Meanwhile, new circuits emerge after fine-tuning, with edge changes playing a more significant role in this process." (line 83); also see "Key Observation 2" (line 244, right column).
- This is evaluated by measuring the ratio between (roughly speaking) new nodes/edges divided by initial nodes/edges, respectively. See line 267 and surrounding paragraphs for discussion.
- However, this method based on ratios is **not a priori an apples-to-apples comparison**, as it does not account for the different asymptotics of nodes vs edges in the graph of the model. Since edges increase quadratically in the number of nodes (because any two attention heads/MLP blocks can connect, not just ones in consecutive layers), increasing the number of nodes by e.g. a factor of $2$ will generally increase the number of edges by a factor of $4$ absent any special structure (e.g., if done randomly).
As such, the observed higher fraction of edges may be an artifact of these scaling dynamics and not a fundamental property of the fine-tuning process as claimed. - it is shown that the circuit nodes in early layers largely do not change, while the fine-tuning introduces new nodes in later layers (figure 3). However, could this be simply because almost all nodes in early layers happen to already be in the circuit? This is particularly plausible seeing as the number of nodes in early layers of the circuit is roughly constant. This should be addressed. - the claims about Circuit LoRA are well-motivated, interesting, and well-supported by the experimental evidence. Methods And Evaluation Criteria: - I don't understand what the robustness metric & associated experiments bring to the story of the paper. Also, the description of how the robustness metric is calculated was not sufficiently clear. - the $\Delta_S$ metric (Line 267, left column) is dependent on the checkpoint schedule of the fine-tuning. In particular, a denser checkpoint schedule will lead to a value at least as high as a coarser schedule (unless the $\Delta s_{t\to t+1}$ quantity is not signed, but in this case the entire sum would telescope to final - initial). This is unnatural as far as the metric aims to quantify the overall change in nodes an edges in fine-tuning. - When studying the compositional task in section 6, it would be far more natural to report the performance obtained by the union circuit on the task (when e.g. the rest of the network is mean-ablated, or under other interventions; the original IOI paper provides many different ways to measure the faithfulness/completeness of a circuit), as opposed to the harder-to-interpret metric of overlap? Theoretical Claims: N/A Experimental Designs Or Analyses: - In Figure 3, the legend does not describe all the node colors - what do the various shades of grey represent? 
Supplementary Material: N/A Relation To Broader Scientific Literature: The paper is motivated by an important and timely question in the mechanistic interpretability literature, and closes some small gaps in our mechanistic understanding of fine-tuning. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The paper is well-written and easy to follow and understand Weaknesses: - It should be noted that even though the paper shows that LoRA can be improved using mechanistic knowledge of the network, this knowledge itself is derived by actually performing full fine-tuning on the network. In this sense, the paper does not constitute a practical improvement upon LoRA, but rather a conceptual proof of concept that post-hoc knowledge of a fine-tuned model can be funneled through some mechanistic metrics into a succinct summary that can in turn improve LoRA fine-tuning. Thus the value of the finding is conceptual rather than practical. Other Comments Or Suggestions: - the specifics of the corruption process should be explained earlier in the text, as it is central to the meaning of the robustness metric. Some questions I had while reading: - does corrupting the sign from + to - mean that we find a circuit using only addition problems and then we check how similar this circuit is to a circuit derived from subtraction problems? - what does it mean to corrupt an arithmetic/geometric sequence problem by "changing one term in the sequence"? Wouldn't the sequence fail to be an arithmetic/geometric sequence after the change? - in general, when there is a reference to the appendix, it is good to describe what the results there demonstrate; as a reminder, reviewers are not required to read the supplementary material. Questions For Authors: - how is "change rate" defined in Figure 3 (bottom right)? OK, I guess it is the thing denoted by $\Delta_S$ in the main text. Could help to clarify, it was confusing on a first read. 
- What is the "compositional circuit" in 6.2.? I assume it is the circuit identified for the compositional task (e.g. $(a+b)*c$) Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thanks for your review and helpful suggestions!

>Q1: However, this method based on ratios is not a priori an apples-to-apples comparison...the observed higher fraction of edges may be an artifact of these scaling dynamics ...

To further distinguish whether it is the natural expansion of the structure or the influence of the fine-tuning mechanism, we supplement experiments on our tasks: **the difference between the estimated edge changes caused by node changes and the actual observed edge changes.** In all tasks, the actual edge changes substantially exceeded the upper bound implied by node changes — often by a factor of **2.6x to 3.1x.** This gap indicates that these additional edge changes are not just caused by nodes, but are actively adjusted during the fine-tuning process. This finding is consistent with our original conclusion.

>Q2: It is shown that the circuit nodes in early layers largely do not change…simply because almost all nodes in early layers happen to already be in the circuit?

Circuit node changes involve **both additions and removals**, not just additions. As shown in Figure 3 (left), a considerable number of nodes in the middle layers are present in the initial circuit but are removed after fine-tuning. This demonstrates that circuit evolution is bidirectional and that the relative stability in early layers cannot be solely explained by initial saturation.

>Q3: I don't understand what the robustness metric…the specifics of the corruption process..Some questions I had while reading...

1. Motivation. We consider robustness experiments because the model performs poorly on the task before fine-tuning. It is important to ensure that the circuits identified in this low-performance regime are still robust. Earlier studies ignored this point. Robustness is defined as the Jaccard similarity between edge sets of circuits extracted on clean vs. perturbed data for the same task.

2. Corruption strategy.
We use this strategy following the **Symmetric Token Replacement (STR)** principle—minimally altering the input while preserving the task type.

- `+` to `–` does not mean testing addition vs. subtraction circuits. Instead, we assess whether specific edges are sensitive to small, task-relevant input changes.
- Changing one term in a sequence may break the exact pattern intentionally. This tests whether circuits remain structurally stable under small semantic shifts, not whether the model still solves the task correctly.

>Q4: the Δₛ metric is dependent on the checkpoint schedule… this is unnatural as far as the metric aims to quantify the overall change in nodes and edges.

We intentionally designed $Δ_S$ to capture the cumulative dynamic changes of nodes and edges throughout the fine-tuning process. Specifically, certain nodes/edges may be added and later removed (or vice versa) during fine-tuning. These transient changes are meaningful, and such important intermediate behaviors would be lost if we only considered the final vs. initial states. $Δ_S$ aims to provide an approximate measure of the dynamic structural evolution, which we believe is important.

>Q5: It should be noted that…Thus the value of the finding is conceptual rather than practical.

Our primary motivation for designing CircuitLoRA was to validate insights gained from our analysis in Section 4. We answer your doubts from two aspects of the experiment:

- **Memory efficiency**: In the first stage of identifying critical layers, we find that using LoRA with `rank=2` is sufficient, which uses significantly fewer parameters than the base LoRA setup (`rank=32`). This highlights the lightweight nature of our approach.
- **Time efficiency**: In a 4-epoch training setup, critical layers identified after just 1 epoch were already consistent with those from the final model, indicating that full fine-tuning is not required to extract critical layers.
These suggest that our method offers a degree of practical applicability alongside its conceptual value.

>Q6: When studying the compositional task..as opposed to the harder-to-interpret metric of overlap?

In this paper, we chose to use Overlap as a **structural metric**. To explore the faithfulness of the union circuit, we conducted additional experiments on the compositional task. We evaluated the faithfulness of the Union Circuit and found it to be 89.18%, compared to 96.86% for the Compositional Circuit. This faithfulness result, together with the structural overlap, supports our claim in the paper.

>Q7: In Figure 3, the legend does not describe all the node colors - what do the various shades of grey represent?

The various shades of grey in the figure indicate the total degree of each node in the final circuit. Darker grey represents nodes with higher connectivity.

>Q8: how is "change rate" defined in Figure 3 (bottom right)?...What is the "compositional circuit" in 6.2.?

We sincerely apologize for the confusion caused on a first read. Your understanding of both concepts is completely correct.
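As a companion to the Q3 answer in this thread, the robustness metric (Jaccard similarity between edge sets extracted on clean vs. perturbed data) can be sketched minimally. Representing circuit edges as (source, destination) string pairs is an invented convention, not the paper's:

```python
# Minimal sketch of the robustness metric from Q3: Jaccard similarity between
# the edge sets of circuits extracted on clean vs. perturbed inputs.
# The (source, destination) string-pair edge representation is an assumption.

def jaccard(edges_clean: set, edges_corrupted: set) -> float:
    if not edges_clean and not edges_corrupted:
        return 1.0  # two empty circuits are trivially identical
    intersection = len(edges_clean & edges_corrupted)
    union = len(edges_clean | edges_corrupted)
    return intersection / union

clean = {("a0.h1", "mlp2"), ("mlp2", "a3.h0"), ("a3.h0", "logits")}
corrupted = {("a0.h1", "mlp2"), ("mlp2", "a3.h0"), ("a1.h4", "logits")}
print(f"robustness: {jaccard(clean, corrupted):.2f}")  # 2 shared / 4 total = 0.50
```

A value near 1 would indicate that the extracted circuit is stable under the STR-style corruptions described above.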
A Dynamical Systems-Inspired Pruning Strategy for Addressing Oversmoothing in Graph Attention Networks
Accept (poster)
Summary: The paper provides a new control method to steer node embeddings away from the oversmoothing state during propagation, in contrast to existing methods such as G2-gating. The method specifically targets graph-attention-based models, gradually pruning highly correlated connections to increase the spectral gap and avoid oversmoothing. The paper provides a comprehensive theoretical analysis, using spectral analysis of the fixed point to show that the spectral gap has the largest impact on oversmoothing, and therefore proposes to prune highly correlated connections to increase this gap and avoid being trapped in the oversmoothing state. Empirically, three datasets are used to verify the performance of the proposed method against existing baselines, showing competitive performance and alleviated oversmoothing.

Claims And Evidence: The claims are supported with comprehensive theoretical analysis. However, the empirical analysis only covers three real-world datasets, which is insufficient to provide comprehensive evidence for the proposed method.

Methods And Evaluation Criteria: The evaluation metrics make sense.

Theoretical Claims: The theoretical claims were roughly checked and the conclusions should be correct, as several existing studies also support such claims (for example, on the spectral gap).

Experimental Designs Or Analyses: The soundness of the experimental design is fine. But the datasets covered are not sufficient: only three real-world datasets, none of them large-scale, making it hard to show the method's effectiveness, especially when it claims to be much more efficient than G2-gating.

Supplementary Material: Both the theoretical part and the experimental part were reviewed.

Relation To Broader Scientific Literature: The contribution of the paper is to further push the view of deep graph learning as dynamical-system design in the graph learning community.
Essential References Not Discussed: In the view of dynamical systems, extending the work of G2-gating from Rusch, [Jin & Zhu, 2024] has shown that a learned metric, instead of a fixed Dirichlet energy, could also preserve Dirichlet energy and alleviate oversmoothing with competitive performance.

[1] Y. Jin and X. Zhu, "Graph Rhythm Network: Beyond Energy Modeling for Deep Graph Neural Networks," 2024 IEEE International Conference on Data Mining (ICDM).

Other Strengths And Weaknesses: The strength of the paper is its comprehensive theoretical analysis and strong evidence of alleviating oversmoothing. The weakness is that only three medium-scale real-world datasets are covered in the experiments, which is insufficient to substantiate the efficiency claim on large-scale datasets.

Other Comments Or Suggestions: None.

Questions For Authors: I observe that the pseudocode is provided in the supplement; the output is described as the embedding presented after "pruning = 0". What does this mean in general? How does it relate to the termination condition?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We thank the reviewer for the constructive feedback and are pleased that our theoretical analysis and overall approach were well received. Below, we address the main concerns:

1. **Limited Empirical Evaluation on Real-World Datasets**

We agree that evaluating on larger-scale datasets is important. To that end, we extended our experiments to include OGB benchmarks (ogbn-arxiv and ogbn-products) as well as LRGB datasets. The additional results are summarized below:

| Dataset | Model | Accuracy (%) | GFLOPS | Accuracy/GFLOPS |
| ----- | ----- | ----- | ----- | ----- |
| ogbn-arxiv | GCN | 71.9 | 12.5 | 5.75 |
| ogbn-arxiv | G2GAT | **72.5** | 10.3 | 7.04 |
| ogbn-arxiv | **DYNAMO-GAT** | 72.1 | **6.7** | **10.76** |
| ogbn-products | G2GAT | 73.9 | 22.1 | 3.34 |
| ogbn-products | **DYNAMO-GAT** | **75.3** | **14.5** | **5.19** |

These results not only confirm that DYNAMO-GAT scales to larger graphs but also highlight its computational efficiency and improved trade-off between accuracy and complexity compared to baseline methods.

2. **Relation to Broader Literature and Essential References**

We appreciate the suggestion to discuss the work by Jin and Zhu (2024) on Graph Rhythm Networks, which employs a learned metric to preserve Dirichlet energy. In our revised manuscript, we will include a detailed discussion comparing our dynamic, noise-driven pruning approach to learned-metric techniques. This will emphasize that our method complements these approaches by directly addressing oversmoothing via enhanced spectral gap control.

3. **Clarification on Pseudocode – "Output After Pruning = 0"**

We thank the reviewer for highlighting the ambiguity in the pseudocode. The phrase "after pruning = 0" was a formatting artifact and does not imply a termination condition. Specifically:

* The intended meaning is that the final node representations are produced after the pruning process has been applied at all layers.
* There is no condition under which the pruning stops (e.g., when the pruning ratio reaches zero). Instead, the pruning rate evolves as $r(t) = r_0 \cdot (1 + \gamma t)$ and is applied continuously throughout the network.

4. We have revised the pseudocode to remove the misleading "= 0" fragment. The corrected line now reads: **Output:** Final node representations after pruning.

We hope these clarifications, along with our extended empirical evaluation and updated discussion of related work, address the reviewer's concerns. We believe that these improvements strengthen our manuscript and further validate the proposed method's effectiveness and scalability.
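A minimal sketch of the linear pruning schedule quoted above, $r(t) = r_0 \cdot (1 + \gamma t)$. The clipping to [0, 1] (a rate cannot exceed 1) and the example constants are our assumptions, not values from the paper:

```python
# Sketch of the pruning-rate schedule r(t) = r0 * (1 + gamma * t) described
# in the rebuttal. Clipping to [0, 1] and the example constants r0, gamma
# are assumptions for illustration only.

def pruning_rate(t: int, r0: float = 0.05, gamma: float = 0.1) -> float:
    return min(1.0, max(0.0, r0 * (1.0 + gamma * t)))

for t in (0, 5, 10):
    print(f"step {t}: prune {pruning_rate(t):.3f} of edges")
```

The schedule increases linearly with depth/time, consistent with the rebuttal's statement that pruning never terminates but is applied continuously throughout the network.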
Summary: This paper introduces a refreshing perspective on the over-smoothing behavior of graph neural networks through the lens of dynamical systems. After establishing the theoretical framework, a novel architecture that dynamically prunes the attention weights is proposed, achieving state-of-the-art performance on some benchmark datasets.

Claims And Evidence: The claims are supported theoretically and experimentally.

Methods And Evaluation Criteria: Yes, although a more thorough benchmark using larger and more realistic graph datasets is encouraged.

Theoretical Claims: Yes. They seem sensible to me.

Experimental Designs Or Analyses: Yes. They look reasonable, although I would also compare the over-smoothing behavior against the rich literature of anti-over-smoothing methods, both stochastic and residual.

Supplementary Material: The detailed proofs seem sensible to me.

Relation To Broader Scientific Literature: This paper would be highly relevant to the graph modeling community.

Essential References Not Discussed: NA

Other Strengths And Weaknesses: NA

Other Comments Or Suggestions: NA

Questions For Authors: Would you consider testing this method on physical/chemical datasets where over-smoothing is well known to have caused many problems?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive assessment and insightful suggestions. Below, we address the key points raised: 1. **Evaluation on Larger and More Realistic Datasets:** As noted in our response to Reviewer kxJk (see the table provided therein), we have extended our evaluation to larger-scale benchmarks such as ogbn-arxiv and ogbn-products. These additional experiments reinforce that DYNAMO-GAT not only scales effectively but also achieves a superior accuracy-to-complexity trade-off compared to baseline methods. We will ensure these extended results are clearly presented in the final manuscript. 2. **Comparison with Alternative Anti-Oversmoothing Methods:** We appreciate the suggestion to compare our approach with stochastic and residual anti-oversmoothing strategies. While our current experiments primarily focus on demonstrating the benefits of dynamic, noise-driven pruning in terms of increasing the spectral gap and preserving node diversity, we will add a discussion in the revised manuscript that contextualizes our method within the broader literature. This discussion will highlight the unique advantages of our approach relative to other strategies. 3. **Testing on Physical/Chemical Datasets:** The idea of evaluating DYNAMO-GAT on physical/chemical datasets is very interesting and represents a promising direction for future work. Oversmoothing is indeed a significant challenge in these domains, and we believe our method could offer substantial benefits. We plan to explore this avenue in our future studies. We thank the reviewer again for the constructive feedback and believe that these additions and clarifications will further strengthen our work.
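Since the rebuttals in this thread repeatedly invoke the spectral gap as the anti-oversmoothing quantity, here is a hedged numerical sketch of computing it for a toy graph. The symmetric normalization and the example graph are our illustrative choices, not the paper's exact setup:

```python
import numpy as np

# Toy computation of the spectral gap (lambda_1 - lambda_2) of the
# symmetrically normalized adjacency D^{-1/2} A D^{-1/2}, the quantity tied
# to oversmoothing. Graph and normalization are illustrative choices.

def spectral_gap(adj: np.ndarray) -> float:
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    norm_adj = d_inv_sqrt @ adj @ d_inv_sqrt
    eigvals = np.sort(np.linalg.eigvalsh(norm_adj))[::-1]  # descending
    return float(eigvals[0] - eigvals[1])

# 4-cycle: normalized eigenvalues are {1, 0, 0, -1}, so the gap is 1.
c4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)
print(spectral_gap(c4))
```

A larger gap means faster mixing toward the dominant eigenvector, which is why the paper's strategy of pruning redundant edges to enlarge the gap relates directly to oversmoothing behavior.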
Summary: The paper presents a dynamical systems take on GNNs and proposes dynamically pruning edges based on learnt attention weights in GAT to combat oversmoothing.

## Update after rebuttal:
I thank the authors for their detailed response to my questions and their effort in providing further experiments. While results on larger heterophilic datasets in [1] would add further value to the paper, I believe that the additional discussion and experiments in the rebuttal are sufficient for the paper to be of value to the community, particularly the method used for learning sparse attention patterns. Therefore, I have raised my score.

Claims And Evidence: The authors support their theoretical claims with proofs and empirical evidence from experiments on synthetic data.

Methods And Evaluation Criteria: While the theoretical claims are supported by empirical evidence on synthetic datasets, evaluation on real-world datasets is rather weak in my opinion. While the predictive accuracy achieved by DYNAMO-GAT is very similar to that of its main competitor, i.e. G2GAT, it is shown to be computationally more efficient with a better trade-off between accuracy and cost. However, this is shown on three small datasets. If the main advantage of DYNAMO-GAT over G2GAT is lower computational cost, it should be evaluated on larger datasets such as the OGB datasets, LRGB datasets that may benefit from deeper models, and also the larger benchmark heterophilic datasets in [1] that could potentially benefit more from DYNAMO-GAT, as the authors show on synthetic data.

[1] Platonov et al. A critical look at the evaluation of GNNs under heterophily: Are we really making progress?

Theoretical Claims: The theoretical claims seem correct but the mathematical details were not checked in detail.

Experimental Designs Or Analyses: See the methods and evaluation criteria section of the review.

Supplementary Material: The experimental section in the appendices was reviewed.
Relation To Broader Scientific Literature: The paper makes an interesting connection between dynamical systems and GNNs via theoretical analysis to address the widely-studied oversmoothing problem in GNNs. While other solutions such as structural modifications for oversmoothing are mentioned, a discussion of the proposed pruning-based method in the context of graph sparsification, particularly for tackling oversmoothing and for efficiency gains, is missing.

Essential References Not Discussed: None that I can recall.

Other Strengths And Weaknesses: Strengths: The paper presents an effective combination of existing techniques rooted in theoretical analysis. Weaknesses: The trade-off between accuracy and computational cost, which is seemingly the main advantage of DYNAMO-GAT over G2GAT, is evaluated weakly on small-scale datasets, whereas it would truly matter to observe a similar trend on larger graphs, as discussed above in the methods and evaluation criteria section of the review.

Other Comments Or Suggestions: While the two key metrics discussed in the paper, i.e. the oversmoothing measure and accuracy, are reported in the experiments, it may also be of value to report the pruning ratio, i.e. the number of edges pruned during training. This could reveal information on the importance of the structural information in the input graph. Furthermore, empirically analyzing the correlation of node features across the removed edges would further verify the theoretical insights.

Questions For Authors:
1. Does DYNAMO-GAT cater to inductive settings for node classification as well? Is it also effective for graph classification tasks?
2. Is there any additional hyperparameter tuning required to train DYNAMO-GAT?
3. While attention coefficients learning a value of 0 effectively prunes edges, [2] analyzes the attention mechanism of GAT and shows that it suffers from trainability issues that obstruct the coefficients from learning this value during training.
Could the authors comment on this, perhaps from a dynamical systems perspective? [2] Mustafa et al. GATE: How to Keep Out Intrusive Neighbors Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Weakness 1: Limited evaluation on small-scale real-world datasets** As discussed in our response to Reviewer kxJk (see the table provided therein), we have extended our experiments to larger-scale benchmarks such as ogbn-arxiv and ogbn-products. These additional results confirm that DYNAMO-GAT not only scales effectively—with a significantly lower GFLOPS count—but also offers a superior accuracy/GFLOPS trade-off relative to baselines. **Weakness 2: Missing discussion on connection to graph sparsification** We appreciate this valuable suggestion. While traditional graph sparsification techniques (e.g., spectral sparsification and effective resistance methods) primarily focus on reducing redundancy to enhance computational efficiency, our pruning strategy differs in its dynamical-systems-inspired rationale. Specifically, our method targets edges based on noise-driven covariance analysis, directly addressing the oversmoothing issue by pruning structurally redundant edges linked to highly correlated node features. We will thoroughly clarify these distinctions and similarities in our revised manuscript. **Weakness 3: Lack of pruning ratio metrics and structural insights** We thank the reviewer for this valuable suggestion. Following your recommendation, we performed additional experiments analyzing both the **pruning ratio** (fraction of edges removed) and the **cosine similarity** of node features for retained versus pruned edges. Specifically, we computed the cosine similarity directly between the corresponding node embeddings at the final layer before the pruning decisions. **Empirical Results (Cora & Citeseer datasets):**

| Dataset | Pruning Ratio | Cosine Sim. (Retained Edges) | Cosine Sim. (Pruned Edges) |
| ----- | ----- | ----- | ----- |
| Cora | 18.3% | 0.81 | 0.52 |
| Citeseer | 15.7% | 0.78 | 0.48 |

These results indicate that edges connecting nodes with lower feature similarity (lower cosine similarity) are preferentially pruned, empirically supporting our theoretical analysis of the role of structural redundancy in oversmoothing. **Question 1: Inductive node classification and graph classification applicability** We thank the reviewer for highlighting this aspect. To confirm DYNAMO-GAT’s suitability for inductive scenarios, we performed additional experiments on inductive benchmarks from OGB (**ogbn-arxiv**, **ogbn-products**). DYNAMO-GAT achieves competitive or superior inductive accuracy with significantly greater computational efficiency (up to 55% improvement in Accuracy/GFLOPS), directly addressing the reviewer's question. We will further explore graph-level tasks in future work. **Question 2: Additional hyperparameter tuning requirements** DYNAMO-GAT introduces two hyperparameters: the noise level (σ) and the pruning threshold adaptation parameter (β). Our extensive hyperparameter sensitivity analysis demonstrates robust model performance across a wide range of values:

| Noise Level σ | Threshold β | Accuracy (%) |
| ----- | ----- | ----- |
| 0.01 | 0.5 | 82.9 |
| 0.05 | 1.0 | 83.5 |
| 0.1 | 2.0 | 83.2 |

Model accuracy remains consistently stable within ±0.6% across these ranges, significantly simplifying the hyperparameter tuning process required for deployment. **Question 3: Addressing GAT attention mechanism trainability issues (Mustafa et al. \[2\])** Mustafa et al. \[2\] show that GAT attention coefficients struggle to effectively learn zero-valued weights due to trainability issues inherent in the standard attention mechanism. This aligns well with our dynamical systems perspective, where attention coefficients that fail to reach zero prevent the pruning of edges and thus contribute to oversmoothing. 
From a dynamical systems viewpoint (as detailed in our paper), GAT's convergence to fixed points (low-dimensional attractors) can cause node representations to become homogeneous. Mustafa et al. illustrate a similar phenomenon, noting the attention coefficients' tendency to remain away from zero, effectively retaining redundant edges and promoting oversmoothing. Our method, DYNAMO-GAT, explicitly addresses this issue by incorporating noise-driven covariance analysis and Anti-Hebbian pruning. Dynamically adjusting the network's attractor landscape during training allows for attention coefficients to more readily achieve near-zero values, thus effectively pruning irrelevant edges. This dynamically adaptive strategy mitigates the trainability issues described by Mustafa et al. by actively perturbing the system away from stable attractors that prevent the learning of sparse attention patterns. Consequently, our approach not only overcomes the inherent training difficulty but also preserves distinct attractor states, directly targeting the fundamental cause of oversmoothing from a dynamical systems perspective. We will clarify this connection in the revised manuscript, highlighting explicitly how DYNAMO-GAT addresses the trainability limitations analyzed by Mustafa et al. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response to my questions and their effort in providing further experiments. While results on larger heterophilic datasets in [1] would add further value to the paper, I believe that the additional discussion and experiments in the rebuttal are sufficient for the paper to be of value to the community, particularly the method used for learning sparse attention patterns. Therefore, I have raised my score.
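The retained-versus-pruned similarity analysis reported in the rebuttal above can be illustrated in a few lines of code. This is a minimal sketch with toy embeddings and edge lists (all values hypothetical, not taken from the paper):

```python
import numpy as np

def edge_cosine_similarity(H, edges):
    """Mean cosine similarity between endpoint embeddings for a set of edges."""
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)  # row-normalise embeddings
    return float(np.mean([Hn[u] @ Hn[v] for u, v in edges]))

# Toy example: 4 nodes with 2-D final-layer embeddings (hypothetical values).
H = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0],
              [0.1, 0.9]])
retained = [(0, 1), (2, 3)]   # edges between similar endpoints
pruned   = [(0, 2), (1, 3)]   # edges between dissimilar endpoints

# Retained edges show higher mean cosine similarity than pruned ones,
# mirroring the qualitative pattern in the table above.
assert edge_cosine_similarity(H, retained) > edge_cosine_similarity(H, pruned)
```

Running the same computation on the real embeddings before each pruning decision yields the per-dataset averages reported in the table.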
Summary: The paper introduces **DYNAMO-GAT**, a **pruning strategy** for **Graph Attention Networks (GATs)** to mitigate **oversmoothing** using a **dynamical systems perspective**. The authors propose: 1. **Noise-driven covariance analysis** to detect oversmoothing. 2. **Anti-Hebbian learning** to selectively prune attention weights. 3. **A theoretical framework** linking oversmoothing to attractor dynamics in GNNs. 4. **Experimental validation** showing **improved performance and efficiency** across standard benchmark datasets. The approach shifts from architectural modifications (e.g., skip connections, normalization) to **dynamically altering attention weights** based on system stability analysis. Claims And Evidence: ### **Supported Claims** 1. **Oversmoothing in GNNs can be analyzed via dynamical systems.** The paper provides **theoretical proofs** on **fixed points and spectral analysis**, explaining how oversmoothing occurs. 2. **DYNAMO-GAT effectively mitigates oversmoothing.** Empirical results show **higher accuracy** across deeper networks compared to standard **GCN, GAT, and G2GAT**. The **oversmoothing coefficient (µ(X)) remains stable**, indicating better node representation preservation. 3. **Pruning reduces computational cost without degrading performance.** The model achieves **higher accuracy-to-GFLOPS ratios** than existing baselines, showing **efficiency improvements**. ### **Potentially Overstated or Weakly Supported Claims** 1. **Theoretical guarantees ensure robustness across all graph structures.** The theory **mainly focuses on spectral gap analysis**; however, performance may still degrade in **highly heterophilic or dense graphs**. 2. **DYNAMO-GAT generalizes to all attention-based GNNs.** The method is **only tested on standard GATs**. Its applicability to **Transformer-like GNNs or hierarchical architectures** remains uncertain. 
Methods And Evaluation Criteria: ### **Strengths** - **Dynamical systems perspective** provides a new theoretical view on oversmoothing. - **Effective pruning strategy** improves both expressiveness and efficiency. - **Empirical validation** across multiple datasets confirms effectiveness. ### **Areas for Improvement** - **No runtime analysis** – How does pruning impact training/inference time? - **Limited discussion on pruning thresholds** – How are optimal pruning rates determined? - **Ablation study missing** – How does each component (e.g., noise injection vs. pruning) contribute separately? Theoretical Claims: ### **Strengths** - Provides a **mathematical formulation** of oversmoothing using **eigenvalue properties** of GATs. - Shows how **pruning increases spectral gaps** and **prevents feature collapse**. ### **Limitations** - **Fixed-point analysis assumes ideal conditions** – Real-world GNNs might not satisfy all assumptions. - **No analysis of pruning-induced instability** – Could pruning negatively affect long-term model robustness? Experimental Designs Or Analyses: ### **Strengths** - **Uses benchmark datasets (Cora, Citeseer, Cornell) for fair comparisons.** - **Compares against competitive baselines (GAT, GCN, G2GAT).** - **Analyzes oversmoothing using both accuracy and feature diversity metrics.** ### **Weaknesses** - **No failure case analysis** – When does DYNAMO-GAT fail? - **Scalability not tested** – How does it perform on larger graphs? - **Impact on node classification under different homophily levels is not explored.** Supplementary Material: Yes all parts. Relation To Broader Scientific Literature: ### **New Contributions** **Introduces dynamical systems principles** to analyze **GAT oversmoothing**. **Proposes noise-driven pruning** for dynamically adjusting attention weights. **Provides theoretical guarantees** for pruning’s impact on network expressiveness. 
### **Missing Comparisons** **No evaluation on deeper Transformers** (e.g., **Graphormer, SAN**). **Limited discussion on energy efficiency trade-offs** (e.g., pruning vs. FLOP savings). Essential References Not Discussed: NO Other Strengths And Weaknesses: ### **Strengths** - **Combines theory and practice effectively.** - **Proposes a well-motivated pruning strategy with strong empirical results.** - **Demonstrates practical impact on deep GNN scalability.** ### **Weaknesses** - **No real-world deployment discussion** – How would this method work in industry applications? - **Impact of pruning on interpretability is unclear** – Does removing attention weights affect explainability? Other Comments Or Suggestions: Please refer to the above section Questions For Authors: Please refer to the above section Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: 1. **Performance on Heterophilic/Dense Graphs:** Our theory—based on spectral gap & fixed-point stability—does not assume homophily or sparsity, generalizing across diverse graph types. * Empirical validation: * Fig. 3 (Syn-Products): DYNAMO-GAT (DGAT) performs well as edge density increases, showing resilience to oversmoothing (OS). * As noted in our reply to Reviewer kxJk, we test on OGBN-Arxiv (homophilic) & OGBN-Products (heterophilic); DGAT performs robustly across both, across varying homophily & density levels. 2. **Generality to Other Architectures:** While our main experiments focus on standard GATs, the core mechanism—noise-driven covariance-based pruning—is attention-agnostic and applies to broader attention-based GNNs. * In Transformer-style GNNs, evolving attention layers can lead to OS—our dynamical framework models this & supports adaptive mitigation. * In hierarchical GNNs, aggregation levels map to time scales; our method identifies feature mixing points and enables multi-scale pruning. * To show generalization, we applied DYNAMO pruning to Graphormer & SAN on Cora & Citeseer:

|Model|Dataset|Acc. (%)|OS Coeff.|GFLOPs|
|--|--|--|--|--|
|Graphormer (base)|Cora|83.92|0.51|2.1|
|DYNAMO-Graphormer|Cora|85.13|0.64|1.41|
|SAN (base)|Citeseer|80.23|0.47|2.35|
|DYNAMO-SAN|Citeseer|81.74|0.59|1.52|

3. **Runtime Analysis:** We report GFLOPS as a proxy for computational complexity. While actual training & inference times depend on hardware, our method introduces a small training overhead (due to covariance computation) while substantially reducing inference complexity. 4. **Pruning Thresholds:** DGAT uses ***adaptive thresholds*** based on feature covariance (Eq. 40), removing the need for fixed pruning rates. This adaptivity aligns pruning with evolving node representations, minimizing hyperparameter tuning. 5. 
**Ablation:** We ablated each core component—noise injection, covariance pruning, adaptive thresholds, gradual pruning, attention recalibration. Results show that all parts are critical: removing any causes performance drops and increased OS.

|Model|Acc. % (Cora)|Acc. % (Citeseer)|OS Coeff. (Cora)|OS Coeff. (Citeseer)|
|--|--|--|--|--|
|Full DGAT|83.21|82.01|0.57|0.62|
|– Noise Injection (σ=0)|81.54|80.26|0.45|0.52|
|– Covariance-based Pruning|79.32|77.15|0.31|0.36|
|– Adaptive Thresholding (fixed threshold)|80.67|79.52|0.38|0.41|
|– Gradual Pruning (aggressive pruning)|80.14|78.93|0.34|0.39|
|– Attention Recalibration|79.78|78.41|0.35|0.40|

6. **Theoretical Assumptions & Failure Cases:** We agree that theoretical guarantees often rest on simplifying assumptions. Our analysis assumes Lipschitz continuity and bounded attention weights—standard in GNN theory—and we empirically validate its predictions (e.g., spectral gap improvement, robustness to OS) across diverse datasets and structural regimes. Potential failure modes reflected in our experiments: * Low-degree graphs: In Fig. 3, performance drops when node degrees are very low—pruning can limit already sparse message passing. * Excessive pruning: Our ablation includes a “– Gradual Pruning” variant, simulating aggressive, non-adaptive pruning. This leads to significant accuracy drops & increased OS, confirming that premature pruning can remove essential edges. We plan to explore mitigation strategies like pruning warm-up & degree-aware regularization in future work. 7. **Stability:** Our stability analysis (Lemmas 4 & 5) shows covariance-based pruning reduces the Jacobian’s spectral radius, thereby enhancing long-term model robustness rather than causing instability. Experimental observations support these predictions. 8. **Scalability:** As detailed in our response to Reviewer kxJk, we have evaluated DGAT on larger-scale benchmarks (e.g., ogbn-arxiv & ogbn-products). 
These results demonstrate that our method scales efficiently to graphs with hundreds of thousands of nodes while maintaining a superior accuracy/GFLOPS trade-off. 9. **Varying Homophily Levels:** Fig. 3 shows consistent performance across a wide homophily spectrum, including low-homophily regimes. 10. **Energy Efficiency:** By pruning low-value attention edges, DGAT significantly reduces inference FLOPs, translating into lower computational & energy costs. Although the training overhead is slightly increased due to covariance computations, the overall efficiency gains are substantial. 11. **Real-World Deployment:** DGAT is well-suited for real-world applications—such as recommendation systems, fraud detection, or molecular modeling—where OS limits model depth. Its reduced inference cost & compatibility with standard practices like incremental pruning, retraining, & recalibration make it practical for deployment in compute-constrained environments. 12. **Interpretability:** DGAT prunes edges with low feature covariance (often low attention weights), potentially enhancing the interpretability of the resulting attention maps by reducing noise.
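The covariance-guided pruning idea described in this rebuttal (dropping edges with low feature covariance) can be sketched roughly as follows. The fixed similarity threshold and toy data here are simplifying assumptions for illustration; DYNAMO-GAT itself uses an adaptive, noise-driven criterion (Eq. 40), not a fixed cutoff:

```python
import numpy as np

def prune_low_similarity_edges(H, edges, threshold=0.3):
    """Keep edges whose endpoint embeddings have cosine similarity above a
    fixed threshold. Illustrative only: the actual method adapts the
    threshold from feature covariance during training."""
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)  # row-normalise
    return [(u, v) for u, v in edges if Hn[u] @ Hn[v] > threshold]

# Toy graph: node 2's features point in a different direction from 0 and 1.
H = np.array([[1.0, 0.0],
              [0.8, 0.2],
              [0.0, 1.0]])
edges = [(0, 1), (0, 2), (1, 2)]
kept = prune_low_similarity_edges(H, edges)  # only (0, 1) survives
```

Edges touching node 2 fall below the similarity cutoff and are removed, while the edge between the two aligned nodes is retained.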
Multi-Domain Graph Foundation Models: Robust Knowledge Transfer via Topology Alignment
Accept (poster)
Summary: This paper proposes the Multi-Domain Graph Foundation Model (MDGFM) to address the challenge of transferring knowledge across graphs from different domains. MDGFM aligns graph topologies through a decoupled embedding mechanism, a graph structure learning module, and a prompt-tuning approach. This alignment allows MDGFM to effectively transfer knowledge from multiple source domains to a target domain, even for unseen domains. Theoretical analyses and experiments on both homophilic and heterophilic graph datasets validate the robustness and efficacy of MDGFM. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes, the methodology and evaluation make sense to me. Theoretical Claims: Yes, I checked them. Experimental Designs Or Analyses: Yes, the experiments are sound to me. Supplementary Material: Yes, I reviewed all supplementary materials. Relation To Broader Scientific Literature: Existing graph models, such as graph neural networks (GNNs), heavily depend on labeled data, which is often scarce and costly. This paper proposes an effective and robust unified graph foundation model, which performs well on graphs in different domains. Essential References Not Discussed: There is a new publication, which is highly related to this submission. The authors should take a look. SAMGPT: Text-free Graph Foundation Model for Multi-domain Pre-training and Cross-domain Adaptation, WWW 2025. Other Strengths And Weaknesses: Strengths The paper is well-structured and easy to follow, with a clear presentation of the methodology and results. The authors provide a solid theoretical foundation for MDGFM, including proofs of its effectiveness and domain generalization capabilities. The proposed MDGFM shows robust performance across various datasets, including both homophilic and heterophilic graphs. Weaknesses The authors should provide more detailed explanations of the experimental results, particularly why certain methods outperform others in specific scenarios. 
The authors should include more large-scale datasets to validate the scalability of the proposed method. Other Comments Or Suggestions: The font for k is different in the caption and Fig.5. Questions For Authors: Could you provide more insights into why MDGFM performs better on certain homophilic and heterophilic datasets? How does MDGFM handle imbalanced datasets or noisy data? Have you considered applying MDGFM to dynamic graphs or temporal datasets? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your thoughtful feedback. Your constructive criticism is invaluable in refining our work. Below, we give point-by-point responses to your comments. **Weakness 1 & Question 1: Further explanations** Thank you for raising this important point. We agree that a clearer explanation of MDGFM’s performance across different datasets enhances the completeness of our work. To address it comprehensively, we now analyze the performance from two perspectives: task setting (one-shot vs. multi-shot transfer) and graph homophily (homophilic vs. heterophilic). (i) One-shot vs. Multi-shot **In one-shot settings**, the target domain has little labeled data, so adaptation relies heavily on domain alignment and structural generalization. Thus, methods like GCOPE and MDGPT, which pre-train on multiple domains, achieve better results than supervised or graph-prompting methods. Specifically, MDGFM outperforms all baselines due to its knowledge-transfer capacity: it effectively captures both domain-specific information and shared patterns across domains. **In multi-shot settings**, baselines like GraphCL and GPF (even GCN) occasionally catch up with or outperform multi-domain pre-training methods. This is because they leverage target supervision more directly, and in high-label regimes their implicit overfitting can yield short-term gains. However, MDGFM remains competitive and often more stable under cross-domain generalization due to invariant learning. (ii) Homophilic vs. Heterophilic Graphs **On homophilic graphs**, traditional GNNs like GCN or GAT may perform decently due to strong neighborhood-label consistency. However, MDGFM still shows advantages in low-label settings due to cross-domain transferability via decoupled embedding and prompt regularization. 
**On heterophilic graphs, MDGFM significantly outperforms almost all baselines.** This is because standard message-passing methods suffer from the abundant noise residing in the topology, while our framework applies GSL to robustly learn invariant knowledge, producing domain-aligned graphs that reduce harmful interference. **Weakness 2: Additional experiments on large-scale datasets** We sincerely appreciate your suggestion to include large-scale evaluations. In response, we carefully selected and added **three additional large-scale datasets** with 30K+ nodes (i.e., Github, Deezer and T-Finance), covering diverse graph domains and scales, to thoroughly validate the scalability and generalization ability of MDGFM. Experimental settings are the same as in the previous setups. The new results, along with the original Penn94 evaluation, are summarized in **https://anonymous.4open.science/r/Large-scale-datasets-35EE**. They show that MDGFM still outperforms all baseline methods and scales well. **Question 2: Additional experiments on imbalanced datasets and noisy data** Thanks for your concern. We would like to clarify that our paper includes a robustness analysis under multiple types of noisy conditions in Section 6.5; an expanded robustness analysis can be found in Appendix C.1. Specifically, we simulate noise by randomly adding or deleting edges, and even conduct meta-attacks. Results demonstrate that MDGFM consistently outperforms baselines under all scenarios, which confirms its robustness to structural noise. In response to the reviewer's suggestion, we conduct **new experiments on imbalanced data** using a real-world financial graph dataset, T-Finance, which exhibits high class imbalance with skewed label distributions (minority-majority ratio up to 0.048:1). We compare MDGFM with GCOPE and MDGPT under the same training setups using ACC/F1/AUC metrics. 
The results, summarized in **https://anonymous.4open.science/r/Imbalanced-Noisy-data-05ED**, show that MDGFM achieves significantly better performance, confirming its effectiveness under imbalanced label regimes. We believe these results, together with the original noisy-graph experiments, provide a comprehensive view of its robustness. **Question 3: Generalization to Dynamic or Temporal Graphs** Currently, MDGFM is designed for static graphs. However, due to its modular and decoupled structure, it can be extended to dynamic settings in future work. Specifically, the prompt and structure-learning modules can be adapted to handle temporal snapshots. We appreciate this insightful suggestion and will discuss temporal extensions in the Conclusion section. Additionally, thank you for pointing out the relevant work SAMGPT (WWW 2025), which is the published version of citation [Yu et al., 2024]. We also acknowledge the difference in the font of "k" and will correct it in the final version. Once again, we thank the reviewer for the constructive suggestions. With the added experiments, domain analyses, and scalability discussion, we believe MDGFM has strong theoretical grounding, broad practical utility, and extensibility toward future challenges. We hope these improvements merit a stronger overall recommendation. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response—it has resolved my concerns. I would like to recommend the acceptance of this paper.
Summary: The authors propose MDGFM to solve the graph pre-training issue. The key contributions include: a novel framework that aligns graph topologies across multiple domains using Graph Structure Learning (GSL); an adaptive embedding mechanism that balances features and topologies for improved generalization; and a dual-prompt tuning approach that enhances adaptation to unseen domains. Extensive experiments validate the model’s effectiveness. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: The problem of multi-domain generalization in graph learning is important, and the paper presents a timely solution inspired by the success of foundation models in NLP and CV. Essential References Not Discussed: None noted. Other Strengths And Weaknesses: Strengths: The problem of domain generalization in graph learning is essential, and the paper presents a timely solution inspired by the success of foundation models in NLP and CV. The topology alignment mechanism and graph structure refinement are well-motivated, addressing key challenges in cross-domain graph learning. The paper evaluates adversarial robustness and domain sensitivity, showing the model’s resilience to noise and distribution shifts. Weaknesses: The following aspects could further strengthen it. 1. The computation complexity is unclear. Given the increasing size of real-world graphs, a complexity analysis would improve clarity. 2. The paper emphasizes the role of meta-prompts and specific prompts, but it does not extensively analyze their individual contributions. Other Comments Or Suggestions: See Strengths And Weaknesses. 
Questions For Authors: Q1: How are the hyperparameters selected, such as the dimensionality of the feature space and the number of neighbors in kNN? Q2: How does MDGFM perform with and without prompt tuning? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive and constructive feedback. We greatly appreciate your recognition of the importance of the problem, the motivation of our proposed components, and the comprehensive empirical evaluation. Below we address your valuable suggestions and questions. **Weakness 1: Computational complexity** Thanks for your concern. While our method introduces additional modules such as GSL and prompt-tuning, we have carefully designed MDGFM to remain computationally efficient and scalable to real-world graphs. Following your advice, we provide a detailed analysis of the computational complexity of MDGFM for both the pre-training and downstream phases. In the **pre-training phase**, each of the $N$ source graphs is processed independently. For a graph with $|V|$ nodes and $|E|$ edges (here $|V| = \max_n |V_n|$ and $|E| = \max_n |E_n|$ over the $N$ source graphs), the model first aligns node features via truncated PCA, which reduces the input dimension from $d'$ to $d$ at a cost of $\mathcal{O}(|V| \cdot d \cdot \log d')$. Token modulation is a lightweight element-wise multiplication costing $\mathcal{O}(|V| \cdot d)$. The model then applies locality-sensitive-hashing $k$NN for graph structure learning: with candidate batch size $B$ and $\mathcal{O}(d)$ operations per comparison, reconstructing the graph takes $\mathcal{O}(|V| \cdot B \cdot d)$ time. Next, an $L$-layer GCN operates on the refined structure, contributing an additional $\mathcal{O}(L\cdot |V|\cdot d^2+L \cdot |E| \cdot d+|V| \cdot d)$. Therefore, the total pre-training complexity across all source graphs is $\mathcal{O}\left(N \cdot \left[ |V| \cdot d \cdot \log d' + |V| \cdot B \cdot d +L\cdot |V|\cdot d^2+ L \cdot |E| \cdot d \right]\right)$. Similarly, in the **downstream phase**, the PCA procedure takes $\mathcal{O}(|V_T|\cdot d \cdot \log d')$ time. Prompt fusion and token modulation cost $\mathcal{O}(|V_T| \cdot d)$. 
GSL again uses locality-sensitive $k$NN, which adds $\mathcal{O}(|V_T| \cdot B \cdot d)$. The GCN encoder then performs $L$-layer message passing with cost $\mathcal{O}(L\cdot |V_T|\cdot d^2+L \cdot |E_T| \cdot d+|V_T| \cdot d)$. Finally, classification is done via prototype matching, where each node is compared with $C$ class centroids, yielding $\mathcal{O}(|V_T| \cdot C \cdot d)$. Summing these terms, the overall downstream complexity is $\mathcal{O}(|V_T| \cdot d \cdot \log d'+|V_T| \cdot B \cdot d + L \cdot |E_T| \cdot d +L\cdot |V_T|\cdot d^2+ |V_T| \cdot C \cdot d)$. Overall, the model scales linearly with the number of nodes and edges, and benefits from efficient structure refinement and a modular design built on components such as locality-sensitive $k$NN. We will include this complexity analysis in the final version for completeness. **Weakness 2 & Question 2** Thank you for highlighting this important point. Following your suggestions, we conduct **new ablation studies** and include the results in **https://anonymous.4open.science/r/Ablation-study-on-prompts-FD62**. Specifically, we compare four variants of the model: full MDGFM (with both meta- and specific prompts), w/o meta-prompt (only the target-specific prompt), w/o specific prompt (only the global meta-prompt), and w/o both prompts (i.e., no prompt tuning at all). The results show that: 1. Removing the specific prompt leads to a noticeable performance drop, especially in domains with strong local structural patterns, confirming its role in target adaptation. Removing the meta-prompt also hurts performance, particularly in low-shot settings, indicating that it captures generalizable cross-domain knowledge. Note that it is expected that the meta-prompt has a relatively smaller impact than the specific prompt. 2. Removing both prompts results in the largest degradation, confirming that the two prompts are complementary and essential for effective transfer. These findings support our design choice of dual-prompt tuning and clarify their respective impacts. 
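As a concrete illustration of the prototype-matching step in the complexity analysis above (each node compared against $C$ class centroids), here is a minimal, generic sketch; the few-shot labels and embeddings are toy values, and this is not MDGFM's exact implementation:

```python
import numpy as np

def prototype_classify(H, support_H, support_y, num_classes):
    """Assign each node to the class of its nearest prototype (centroid of
    the labelled support embeddings). A generic sketch of prototype
    matching, not the paper's exact implementation."""
    prototypes = np.stack([support_H[support_y == c].mean(axis=0)
                           for c in range(num_classes)])               # (C, d)
    # Distance of every node to every prototype: (|V_T|, C) matrix.
    dists = np.linalg.norm(H[:, None, :] - prototypes[None], axis=-1)
    return dists.argmin(axis=1)

# Toy few-shot support set: two labelled nodes per class, d = 2.
support_H = np.array([[0.0, 0.0], [0.2, 0.0], [1.0, 1.0], [1.2, 1.0]])
support_y = np.array([0, 0, 1, 1])
H = np.array([[0.1, 0.1], [1.1, 0.9]])  # two unlabelled target nodes
print(prototype_classify(H, support_H, support_y, num_classes=2))  # [0 1]
```

The pairwise distance computation makes the stated $\mathcal{O}(|V_T| \cdot C \cdot d)$ cost of this step easy to see.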
**Question 1: Hyperparameters** We appreciate the reviewer’s attention to experimental details. For feature projection, we apply PCA to reduce the dimensionality of all node features to **$d = 50$**. This value is chosen based on empirical studies and balances expressiveness with computational efficiency. For $k$NN graph construction, we use different $k$ values depending on the structural characteristics of the graph: for homophilic graphs we use $k = 30$ to retain more of the original relations between nodes, while for heterophilic graphs we use a smaller $k = 15$ to avoid amplifying noisy connections. We have also provided a sensitivity analysis (Appendix C), which shows that MDGFM remains robust across a range of $k$ values. To further demonstrate the robustness of our model to different hyperparameter choices, we additionally conduct a **sensitivity analysis** on the feature dimension $d$, and compare the results against GCOPE and MDGPT. The experimental results are summarized in **https://anonymous.4open.science/r/Sensitivity-on-d-6510**.
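The preprocessing pipeline described here (PCA-reduced features, then a $k$NN graph) can be sketched with a brute-force variant. The values of $d$ and $k$ follow the rebuttal, but the cosine-similarity choice and toy data are assumptions, and the paper uses locality-sensitive hashing rather than the $O(|V|^2)$ similarity matrix built below:

```python
import numpy as np

def knn_graph(X, k):
    """Brute-force cosine-similarity kNN graph; returns a directed edge
    list. Illustrative only: MDGFM uses locality-sensitive hashing for
    this step instead of a dense similarity matrix."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T
    np.fill_diagonal(S, -np.inf)               # exclude self-loops
    nbrs = np.argsort(-S, axis=1)[:, :k]       # top-k neighbours per node
    return [(i, int(j)) for i in range(len(X)) for j in nbrs[i]]

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))   # toy features; real setting uses d = 50 after PCA
edges = knn_graph(X, k=2)     # k = 30 (homophilic) or 15 (heterophilic) in the paper
```

Each node contributes exactly $k$ outgoing edges, so the refined graph has $k \cdot |V|$ directed edges.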
Summary: The authors propose a unified approach that aligns graph topologies and features across domains, leveraging Graph Structure Learning (GSL) to refine noisy and adversarial-prone real-world graphs. The framework also introduces an efficient prompt-tuning mechanism to enhance knowledge transfer to unseen domains. Claims And Evidence: Well supported. Methods And Evaluation Criteria: It is convincing. Theoretical Claims: Correct. Experimental Designs Or Analyses: It is convincing. Supplementary Material: Yes. Relation To Broader Scientific Literature: Current graph models often struggle with generalization due to challenges such as graph heterogeneity and scarcity of domain-specific data. Creating robust and adaptable graph foundation models is the next big thing for practical applications. Essential References Not Discussed: Quite complete. Other Strengths And Weaknesses: 1. The paper is well-structured and presents a valuable contribution. It addresses a critical gap in the field of graph representation learning by focusing on topology alignment across diverse domains. 2. The introduction of domain tokens and shared tokens for semantic alignment is innovative and effectively bridges the gap between domains with varying structural and feature characteristics. 3. The experimental evaluation is comprehensive. The results demonstrate consistent improvements over state-of-the-art baselines in both one-shot and few-shot learning scenarios. 4. Some intuitive explanations could be given. For example, the paper introduces several new components (e.g., domain tokens, shared tokens, balance tokens, dual prompts) that may make it less accessible to a broader audience. Other Comments Or Suggestions: 1. "facilitate robust knowledge transfer." → Consider "facilitate effective and robust knowledge transfer." for clarity. 2. The error bound theorem is strong, but consider discussing potential limitations or assumptions. Questions For Authors: Refer to the weaknesses. 
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the highly encouraging and constructive feedback. We are especially grateful for your recognition of our model’s generalization capability, methodological contributions, and comprehensive evaluations. Below we address your valuable suggestions. **Weakness: Intuitive explanations of model components** We sincerely thank the reviewer for pointing out the potential accessibility issue due to the introduction of multiple novel components. We agree that intuitive explanations will improve the clarity of our method, especially for broader audiences not specialized in cross-domain graph learning. Below, we provide a brief intuitive summary of each key component: **Domain Tokens** ($t_{D_i}$): Each domain token acts like a value vector in a Transformer model, storing domain-specific knowledge during pretraining. During the downstream phase, the target domain serves as a query to selectively retrieve and apply relevant knowledge from the source domains. This design enables flexible and efficient cross-domain transfer through implicit attention-like behavior. **Shared Token** ($t_S$): This acts like a "semantic anchor" shared across all domains. It helps extract and preserve invariant patterns that reside in multiple domains, enabling better cross-domain alignment. **Balance Token ($t_{B_i}$)**: This component adaptively balances the contribution of node features and graph topology. Intuitively, it acts as a "tuner" that decides how much structural information to retain versus how much feature content to emphasize, which is especially helpful when structural noise or heterophily is present. **Dual Prompts** (meta-prompt and specific-prompt): These serve as "adapters" during downstream transfer. The meta-prompt transfers generalized knowledge learned from source domains, while the specific prompt fine-tunes the model to the unique structure and features of the target domain. 
In the revised version, we will incorporate these intuitive explanations into the methodology section (Sec. 4) to improve accessibility without sacrificing technical depth. **Comment 1: "facilitate robust knowledge transfer." → Consider "facilitate effective and robust knowledge transfer." for clarity.** We appreciate the suggestion. We will revise the corresponding phrase in the refined version. **Comment 2: Limitations and assumptions of error bound theorem** Thank you for highlighting this. Our theoretical results (Theorems 5.1 and 5.3) rely on the covariate shift assumption and existence of invariant subgraphs across domains. In the revised paper, we will explicitly enumerate the following limitations: 1. The error bound assumes that the target distribution lies within (or close to) the convex hull of the source domains. In highly diverse or outlier domains, this assumption may be violated. 2. One potential limitation of our theoretical framework lies in the assumption of the existence of a universal invariant graph learner $\Phi^*$. This assumption requires that core semantic and structural patterns are preserved across domains after graph structure learning (GSL). However, in real-world scenarios where the relationship between topology and features varies significantly across domains—e.g., in one domain, structure dominates label prediction, while in another, node features are more informative—such shared invariances may not naturally exist. In these cases, identifying a single $\Phi^*$ that captures consistent and transferable structural knowledge across all domains becomes highly non-trivial. We acknowledge this as a theoretical boundary condition and note that our empirical results suggest MDGFM remains effective even when this assumption is only approximately satisfied. 
Despite these theoretical assumptions, we observe in our ablation and robustness experiments (Sections 6.3–6.5) that MDGFM maintains strong performance even under domain removal and adversarial perturbation, which empirically validates the practical soundness of the theoretical setup. Once again, we sincerely appreciate your time and effort in reviewing our paper. Your constructive criticism has been invaluable in refining our work, and we are more than happy to add clarifications to address any additional recommendations and reviews from you!
Rank-One Modified Value Iteration
Accept (poster)
Summary: The authors propose a novel algorithm for solving planning and learning problems of Markov decision processes. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, to some extent. Experimental Designs Or Analyses: Yes, to some extent. Supplementary Material: No. Relation To Broader Scientific Literature: The main novelty is about the theoretical guarantees of their work. Essential References Not Discussed: No. Other Strengths And Weaknesses: None. The writing is clear! Other Comments Or Suggestions: None. Questions For Authors: Page 2: Please explain Equation (2). Page 2: Item (3): Can you show any theoretical guarantees here as well? Page 2: Please say a couple of words about the discount factor $\gamma$. Page 3: What is the output of the algorithm that contains Equation (4)? Page 4: Algorithm 1: Line 5: How is this step implemented? Page 6: Numerical simulations: Can you please provide further info on the implementations and the architectures that were used? Page 7: Thank you for describing limitations :) Are there any other future research directions? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the comments/questions. Here is our response: __R1. Equation (2):__ The first line of eq. (2) is the definition of the value $v^{\pi}$ of a control policy $\pi:\mathcal{S}\rightarrow\mathcal{A}$ as the expected, discounted cost endured by following policy $\pi$: $$ v^{\pi}(s) := \mathbb{E}\_{s\_{t+1}\sim P^{\pi}(s\_t,\cdot)}\left[\sum\_{t=0}^{\infty}\gamma^t c\big(s\_t, \pi(s\_t)\big)\middle| s_0 = s\right],\quad \forall s \in \mathcal{S}. $$ The second equality in eq. (2) then follows from the definition above: $ v^{\pi}$ is the fixed-point of the Bellman consistency operator [4, Thm. 6.1.1], that is, $$ v^{\pi}(s) = \[T^{\pi}(v^{\pi})\](s) := c\big(s,\pi(s)\big)+ \gamma \mathbb{E}_{s^+\sim P^{\pi}(s,\cdot)} \left[ v^{\pi}(s^+) \right] ,\quad \forall s \in \mathcal{S}. $$ __R2. Theoretical guarantee for item (3):__ After the submission of this work, we were able to identify the reason for the observed speed-up in the proposed algorithms. Our recent theoretical developments show that the speed-up happens only for _aperiodic and irreducible_ MDPs during the _policy evaluation_ step. Our numerical simulations with a GridWorld environment with an _absorbing state_ (so that the MDP is not irreducible) confirm this; please see Figure 3 (link below) for the result of numerical simulations in which the existence of an absorbing state leads to R1-VI having the same performance as VI. We will update the manuscript to include this discussion. __R3. Discount factor:__ The discount factor can be seen as a trade-off parameter between short- and long-term costs. That is, by choosing $\gamma$ to be closer to one, we are putting more emphasis on the costs of steps that are further away in the future. __R4. Algorithm of eq. (4):__ The VI algorithm of eq. (4) outputs a value function $v\_k$ upon termination, from which one can generate the greedy policy $\pi^{v\_k}$ according to eq. (6). 
For example, if the termination condition is based on the Bellman error, that is, $\Vert T(v\_k) -v\_k\Vert\_{\infty} \leq \epsilon$, the greedy policy is guaranteed to be $\rho$-optimal (i.e., $\Vert v^{\pi^{v\_k}} - v^{\star} \Vert\_{\infty} \leq \rho$), where $\rho = 2\gamma\epsilon/(1-\gamma)$. __R5. Step (5) of Algorithm 1:__ This step is as follows: $$ \[T(v\_k)\](s), a\_k \leftarrow \min\_{ a \in\mathcal{A} } \left\[ c(s,a) + \gamma \sum\_{s^+ \in \mathcal{S}} P(s^+ | s,a) v\_k(s^+) \right\], \quad \forall s\in\mathcal{S}, $$ with $\[T(v\_k)\](s)$ being the optimal value and $ a\_k $ being an optimal solution of the minimization problem. We note that $\mathcal{A}$ is finite, and the minimization is solved via enumeration over $\mathcal{A}$. __R6. Numerical simulations:__ All the numerical simulations are with finite state-action MDPs borrowed from the related literature. We note the Graph MDP is relatively small and has 18 state-action pairs, while the randomly generated Garnet MDPs are relatively large and have 1000 state-action pairs. For the proposed R1-VI and R1-QL algorithms, we implemented exactly the pseudocodes provided in Algorithm 1 and Algorithm 2, respectively. The update rules of all other tested algorithms are provided in Appendix B.1. All the codes are also made available (link below). We would be happy to provide more details if further clarification is required. __R7. Future research directions:__ We think the most interesting future research directions are the ones that address the three limitations we discussed in Section 6. Actually, we are currently working on these limitations. We believe we already have the answer to the first limitation, i.e., the reason behind the empirically observed speed-up in the convergence rate for a particular class of MDPs. The next interesting research direction is the _asynchronous_ implementation of the proposed algorithms. 
We are currently working on the convergence proof of the standard asynchronous implementation of the proposed algorithms with $O(|\mathcal{S}|\times|\mathcal{A}|)$ computational complexity. However, we believe the real challenge (that has not been addressed in similar works such as Zap-QL) is in the development of an asynchronous implementation with $ O (|\mathcal{A}|)$ computational complexity, i.e., similar to the asynchronous QL algorithm. The most interesting future research direction is the combination of the proposed algorithm with function approximation for handling MDPs with continuous state-action spaces. Finally, one of the reviewers brought the average cost setting to our attention and the possibility of extending the proposed algorithm to that setting. We believe this is also an interesting future research direction. __Figures__ Figure 3: https://anonymous.4open.science/api/repo/r1vi_icml-2F14/file/rebuttal/reducible_maze.pdf?v=40f97d29 __Codes__ https://anonymous.4open.science/r/r1vi_icml-2F14/README.md __References__ [4] Puterman. Markov Decision Processes. 2005.
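To make R1/R4/R5 concrete, the baseline VI algorithm of eq. (4) with the Bellman-error termination and the greedy-policy extraction of eq. (6) could be sketched as below. This is a minimal tabular sketch, not the authors' code; the random MDP is purely illustrative:

```python
import numpy as np

def value_iteration(P, c, gamma=0.9, eps=1e-6):
    """Tabular VI with cost minimization.
    P: (S, A, S) transition tensor, c: (S, A) cost matrix.
    Stops when the Bellman error ||T(v_k) - v_k||_inf <= eps (cf. R4)."""
    S, A = c.shape
    v = np.zeros(S)
    while True:
        # Step (5): enumerate the finite action set A (cf. R5).
        Q = c + gamma * P @ v            # Q(s, a) = c(s,a) + gamma * E[v(s+)]
        Tv = Q.min(axis=1)               # [T(v_k)](s)
        if np.abs(Tv - v).max() <= eps:  # Bellman-error termination
            return Tv, Q.argmin(axis=1)  # value and greedy policy, eq. (6)
        v = Tv

# A small random MDP (hypothetical, for illustration only).
rng = np.random.default_rng(0)
S, A = 20, 4
P = rng.dirichlet(np.ones(S), size=(S, A))   # each row sums to 1
c = rng.uniform(size=(S, A))
v_star, pi = value_iteration(P, c)
```

Since $\gamma < 1$ makes the Bellman operator a contraction, the loop is guaranteed to terminate, and the returned greedy policy enjoys the $\rho$-optimality bound quoted in R4.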
Summary: This paper proposes a rank-one modified value iteration algorithm, a modified policy iteration that approximates the transition dynamics by a rank-one update. The authors prove formal convergence guarantees and demonstrate the algorithm's empirical potential through numerical simulations. Claims And Evidence: In general yes; however some claims feel a bit handwavy. Could the authors be more specific about what they mean when they say the PI algorithm has a local quadratic rate of convergence? My understanding is that PI empirically converges in fewer iterations, but the worst-case iteration bound can be bad. Is this right? Is the assumption of Lemma 3.2 a bit restrictive? In general, the chain may not be irreducible and aperiodic, and such assumptions don't seem necessary to solve the MDP for \gamma < 1, which is the setting discussed in the prelims. Could the authors explain how the results would work for other MDPs that might not satisfy these assumptions? Methods And Evaluation Criteria: The experiments seem reasonable, but it would be interesting to see improvements on more real-world datasets. Theoretical Claims: I skimmed the proofs in the appendix. Experimental Designs Or Analyses: The experimental analysis seems reasonable. Supplementary Material: I skimmed the proofs in the supplement. Relation To Broader Scientific Literature: The paper proposes a variant of value iteration that seems to interpolate between value and policy iteration, and the approach is interesting. Essential References Not Discussed: I am unaware of any similar works that are not discussed. Other Strengths And Weaknesses: The paper seems to be generally well-written overall, and the numerical experiments seem positive. One possible weakness is that the method might not immediately generalize to infinite state-spaces and function approximation analogs, and the experiments seem to be on somewhat synthetic datasets. Other Comments Or Suggestions: The discussion around eq. 
(15) is a bit confusing, and I had to re-read a few times to understand. It would perhaps be clearer to use approximate equals in the derivation to avoid confusion. Questions For Authors: Please see questions mentioned above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the comments/questions. Here is our response: __R1. Convergence of PI:__ The reviewer is right. In the worst case, the convergence of PI in the value space is similar to VI, which is linear with rate $\gamma$ [4, Thm. 6.4.6]. However, under certain conditions, this convergence can be quadratic [4, Thm. 6.4.8]. The required condition is particularly satisfied when the greedy policy $\pi\_{v\_k}$ is optimal. This means that at least locally, i.e., when $v\_k$ is in some neighbourhood of the optimal value function $v_{\star}$, the convergence is quadratic. Let us also note that the greedy policy $\pi\_{v\_k}$ generated by PI is guaranteed to be the optimal policy after a _finite_ number of iterations, which is polynomial in the size of the state-action spaces and the effective horizon $1/(1-\gamma)$ [5]. __R2. On periodicity and irreducibility:__ The assumption of Lemma 3.2 (i.e., aperiodicity and irreducibility) is indeed _not_ required for the proposed algorithms to converge. However, they are critical for observing an improved rate of convergence. We now have a better understanding of why this is the case. Our recent theoretical developments show that the speed-up happens only for _aperiodic and irreducible_ chains during the _policy evaluation_ step. Our numerical simulations with a GridWorld environment with an _absorbing state_ (so that the MDP is not irreducible) confirm this; please see Figure 3 (link below) for the result of numerical simulations in which the existence of an absorbing state leads to R1-VI having the same performance as VI. We will update the manuscript to include this discussion. __R3. On numerical experiments:__ The examples we used for our numerical simulations are borrowed from similar studies that focused on developing new algorithms for finite state-action MDPs; see, e.g., (Devraj et al., 2019), (Goyal et al., 2022), and (Kolarijani et al., 2023). 
As the reviewer also noticed, to handle challenging real-world applications, we must extend the proposed algorithms to continuous state-action MDPs. As we mentioned in Section 6 of our paper, this extension is one of the future research directions we are currently working on. In this regard, we note that an interesting property of the proposed algorithms is that the extra term in the update rule requires the expectation of the Bellman error/temporal difference with respect to the stationary distribution. This property can be useful in extending the proposed algorithms to handle continuous state-action MDPs. __R4. Confusion about the discussion around eq. (15):__ We thank the reviewer for the suggestion. We will update the corresponding part as you suggested to make it clear. __Figures__ Figure 3: https://anonymous.4open.science/api/repo/r1vi_icml-2F14/file/rebuttal/reducible_maze.pdf?v=40f97d29 __References__ [4] Puterman. Markov Decision Processes. 2005. [5] Ye. The simplex and policy-iteration methods are strongly polynomial for the Markov decision problem with a fixed discount rate. Mathematics of Operations Research. 2011.
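The stationary-distribution machinery discussed in R2 above can be sketched briefly: for a row-stochastic $P$, power iteration on $P^{\top}$ recovers the stationary distribution $d$, which yields the rank-one approximation $P \approx e d^{\top}$. This is a minimal NumPy sketch under our own assumptions (a dense random chain, hence aperiodic and irreducible), not the authors' implementation:

```python
import numpy as np

def stationary_via_power(P, iters=200):
    """Estimate the stationary distribution d of a row-stochastic P
    by repeated application of d^T <- d^T P (power iteration)."""
    d = np.full(P.shape[0], 1.0 / P.shape[0])  # uniform initialization
    for _ in range(iters):
        d = d @ P                  # one power step on P^T
        d /= d.sum()               # keep d a probability vector
    return d

rng = np.random.default_rng(1)
n = 30
P = rng.dirichlet(np.ones(n), size=n)  # dense rows: aperiodic & irreducible
d = stationary_via_power(P)
P_r1 = np.outer(np.ones(n), d)         # rank-one approximation P ~ e d^T
```

With an absorbing state, $d$ collapses to a one-hot vector, which is consistent with the rebuttal's observation that the speed-up then disappears.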
Summary: In this paper, the authors have proposed algorithms for planning and learning in MDP-based problems. The proposed algorithms use a rank-one approximation of the transition probability matrix in the policy evaluation step. It uses the stationary distribution of the transition probability matrix, approximated using the power method. Theoretical convergence is provided for both the algorithms. It is proved that the convergence rates and computational complexities are the same as those of the value iteration algorithm in the planning setting and Q-learning in the learning setting. Experimental results are provided to support the theoretical findings and the faster convergence achieved. ## update after rebuttal The response addresses most of my concerns. I have raised my score to 4. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes. There is one concern regarding why the median number of iterations is considered to describe the convergence behavior. I wonder what would be the performance of this approximation in a setting where the transition probability matrix is irreducible and aperiodic but some of the transition probabilities are very small. Supplementary Material: Yes Relation To Broader Scientific Literature: The proposed algorithms use a rank-one approximation of the transition probability matrix in the policy evaluation step. Theoretical convergence is provided for both the algorithms. It is proved that the convergence rates and computational complexities are the same as those of the value iteration algorithm in the planning setting and Q-learning in the learning setting. Experimental results are provided to support the theoretical findings and the faster convergence achieved. Essential References Not Discussed: No Other Strengths And Weaknesses: Overall, the paper proposes a simple yet elegant idea for a modified value iteration algorithm that provides fast convergence. 
The paper is well-written and easy to follow. The analysis appears solid and the experimental results support the theory well. However, there are some concerns related to some experimental settings and some explanations. Other Comments Or Suggestions: Typo: MPDs->MDPs (section A.1) Questions For Authors: 1. Can these algorithms be extended to the average cost setting? Will the approximation using the power method behave differently in such a setting? 2. Some intuitions need to be provided regarding why the best rank-one approximation works, even though a single iteration of the power method is performed in a single step. I wonder what would be the performance of this approximation in a setting where the transition probability matrix is irreducible and aperiodic but some of the transition probabilities are very small. 3. The remark regarding the exploitation of the sparse structure of the matrix to derive the time complexity of R1-QL needs more clarity. 4. Why is the median number of iterations considered to describe the convergence behavior? What happens if we switch to the mean number of iterations? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the comments/questions. Here is our response: __R1.__ Yes. The proposed algorithms can be extended to the average cost setting. For example, consider the PI algorithm that uses relative VI for policy evaluation for unichain MDPs proposed in [4, Sec. 8.6.1]. We have managed to characterize that algorithm via the following update rule in the value space: For a fixed $s\_0\in\mathcal{S}$, $$ v\_{k+1} = v\_k + \big((I-P\_k)(I - e\_{s\_0} e\_{s\_0}^{\top})+e e\_{s\_0}^{\top}\big)^{-1} (T(v\_k) - v\_k), \quad v\_{k+1}( s\_0) = 0.$$ Above, $ P\_k $ is again the transition matrix of the greedy policy w.r.t. $v\_k$, $e$ is the all-one vector, $ e\_{s\_0}$ is the ${s\_0}$-th unit vector, and $T$ is the undiscounted Bellman operator. Now, observe that $$ G\_k = \big((I-P\_k)(I - e\_{s\_0} e\_{s\_0}^{\top})+e e\_{s\_0}^{\top}\big)^{-1} = \big(I-P\_k + (p\_k - e\_{s\_0} +e) e\_{s\_0}^{\top}\big)^{-1} = \big(I - e d\_k^{\top} + (p\_k - e\_{s\_0}+e) e\_{s\_0}^{\top}\big)^{-1}, $$ where $p\_k = P\_k(\cdot, s\_0) $ is the $ s\_0$-th column of $ P\_k $ and we used the approximation $ P\_k \approx e d\_k^{\top}$ in the last equality. The matrix inversion can then be handled efficiently using the Woodbury formula. However, the convergence of this algorithm and any possible improvement in the convergence rate when $d\_k$ is approximated via the power method require further investigation. We really appreciate the reviewer bringing this to our attention; we will add the average cost case to the future research directions. __R2.__ The reason is that in the VI algorithm and, similarly, in the proposed R1-VI algorithm, the greedy policy $\pi\_{v\_k}$ usually stays the same over multiple iterations of $v\_k$. This translates to the transition matrix $P\_k$ being the same over multiple iterations $k$; hence, the R1-VI algorithm effectively performs multiple steps of the power iteration. 
To show this effect, in Figure 4 (link below), we report the performance of R1-VI with more than one step of power iteration in each iteration $k$ for the same numerical simulations as in Section 5 of the paper. As can be seen, there is only a minimal improvement in the performance of R1-VI. Regarding the second point on the transition probability matrix, we would like to clarify that in the randomly generated Garnet MDPs in our numerical simulations, the branching parameter is set such that more than 90 percent of the components of the matrices $P\_k$ are zero. However, the reviewer is correct in the sense that the aperiodicity and irreducibility of the underlying MDP are critical for observing an improved rate of convergence (although not required for the convergence of the proposed algorithms). Our numerical simulations with a GridWorld environment with an _absorbing state_ (so that the MDP is not irreducible and the stationary distribution is a one-hot vector) confirm this; please see Figure 3 (link below) for the result of numerical simulations in which the existence of an absorbing state leads to R1-VI having the same performance as VI. We will update the manuscript to include this discussion. __R3.__ Recall line (10) of the R1-QL Algorithm 2: $$f = (1-\lambda\_k) \widehat{d}\_{k-1} + \lambda\_k F\_k^{\top} \widehat{d}\_{k-1}.$$ Observe that each row $(s,a)$ of the matrix $F\_k \in \mathbb{R}^{|\mathcal{S} \times \mathcal{A}| \times |\mathcal{S} \times \mathcal{A}|}$ is a unit row vector with exactly one entry equal to one, corresponding to the column $(s^+,a^+)$ given by $$ s^+ = \hat{s}\_k^+ \sim \mathbb{P}(\cdot \mid s,a),\quad a^+ = \hat{a}\_k^+ \in \text{arg} \min_{a\in\mathcal{A}} q\_k(\hat{s}^+\_k,a). $$ This means that the $(s,a)$ component of the matrix-vector multiplication $F\_k^{\top} \widehat{d}\_{k-1}$ is simply $\widehat{d}\_{k-1}(s^+,a^+)$. 
That is, computing $F\_k^{\top} \widehat{d}\_{k-1}$ requires $O(|\mathcal{S} \times \mathcal{A}|)$ operations, as opposed to $O(|\mathcal{S} \times \mathcal{A}|^2)$. __R4.__ We reported the median since it is more robust compared to the mean. Figure 5 (link below) reports the mean number of iterations for the same numerical simulations as in Section 5. As can be seen, the results are almost the same as the ones reported in Figs. 1 and 2 of the submitted manuscript. __Figures__ Figure 3: https://anonymous.4open.science/api/repo/r1vi_icml-2F14/file/rebuttal/reducible_maze.pdf?v=40f97d29 Figure 4: https://anonymous.4open.science/api/repo/r1vi_icml-2F14/file/rebuttal/power_iterations.pdf?v=c98470fc Figure 5: https://anonymous.4open.science/api/repo/r1vi_icml-2F14/file/rebuttal/mean_values.pdf?v=53d762f3 __References__ [4] Puterman. Markov Decision Processes. 2005. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. All my queries are answered satisfactorily.
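The sparsity argument in R3 above can be sketched as follows: since each row of $F_k$ has exactly one nonzero entry, products with $F_k$ reduce to index gathers/scatters that cost $O(|\mathcal{S} \times \mathcal{A}|)$ and never materialize the matrix. The sizes and indices below are hypothetical, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000                          # |S x A| state-action pairs
col = rng.integers(0, n, size=n)  # column of the single 1 in each row of F_k
d = rng.dirichlet(np.ones(n))     # a probability vector over (s, a) pairs

# Dense F_k, built only as a reference: one unit entry per row.
F = np.zeros((n, n))
F[np.arange(n), col] = 1.0

# O(n) products that exploit the one-nonzero-per-row structure:
Fd = d[col]                       # gather: row (s,a) picks up d(s+, a+)
Ftd = np.zeros(n)
np.add.at(Ftd, col, d)            # scatter-add, equivalent to F_k^T d
```

Both sparse products agree with the dense `F @ d` and `F.T @ d`, at a fraction of the cost.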
Summary: The paper introduces an accelerated value iteration algorithm by using a rank-one approximation of the transition dynamics in policy iteration. They further extend the algorithm to RL and introduce R1-QL. It is theoretically shown that these algorithms converge at least as fast as their conventional counterparts. Empirically, it is shown that the algorithms enjoy a faster convergence rate. Claims And Evidence: The theoretical results only show the algorithms have at least the same convergence rate as the conventional VI. The claim of acceleration is only shown through experiments. The experiments, however, are limited to synthetic artificial environments. A more natural environment like a maze could better show the viability of the algorithm in real environments. For example, the conditions on aperiodicity and irreducibility in Lemma 3.2 may be critical to observe the acceleration. Methods And Evaluation Criteria: My first main issue with the paper is that it measures the algorithms' performance in the value function error. This is not the standard metric for the control problem in RL. The real metric of interest is the return of the policy obtained by the algorithms. This is very critical in this case as it can be proven that the improvements introduced by the algorithm to value function accuracy do not translate to better policies. At all iterations, the value function is provably just a constant factor different from the value function obtained with VI. Therefore, the sequence of greedy policies will remain the same as VI. Theoretical Claims: The results appear correct to me. Experimental Designs Or Analyses: See "Methods And Evaluation Criteria" Supplementary Material: I haven't closely read the supplementary material. Relation To Broader Scientific Literature: The problem studied by the paper is very significant and interesting. The paper does a good job with the literature review except for two key papers discussed in the next section. 
Essential References Not Discussed: There are two very related papers that need to be discussed. R1VI is a special case of OS-VI [1] when the approximate dynamics $\hat P$ is chosen to be the rank-one approximation. I also think R1VI is very similar if not identical to Rank-1 DDVI [2]. DDVI also uses the rank-one approximated dynamics, its value function is always a constant factor different from VI, and it proves acceleration when the stationary distribution is unique (the largest eigenvalue of the dynamics has no repetition). I find the perspective the paper provides in arriving at R1VI very unique and potentially more generalizable than the above papers. Some more discussion may help the paper. [1] Operator Splitting Value Iteration. NeurIPS 2022 [2] Deflated Dynamics Value Iteration. Arxiv Other Strengths And Weaknesses: I highly appreciate the ideas and perspectives the paper provides. The paper is well-written and easy to read, and addresses an important problem. For weaknesses, see the missing references and evaluation sections. Other Comments Or Suggestions: None Questions For Authors: Please see weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the comments/questions. Here is our response: __R1. Missing references:__ We thank the reviewer for bringing these papers to our attention; we were not aware of them. We will update the manuscript to include them and the following discussion: Ref. [1] considers the matrix splitting method for solving the _policy evaluation_ problem, i.e., the linear equation $v^{\pi} = (I-\gamma P^{\pi})^{-1} c^{\pi}$, using some given or estimated “cheap-to-access” $\hat{P}^{\pi}$ instead of $P^{\pi}$. The method requires computing/estimating another cost function $\bar{c}^{\pi}_v =c^{\pi} +\gamma (P^{\pi} - \hat{P}^{\pi}) v$. We also focus on computational complexity in the development of our algorithm. However, we are proposing a specific rank-one approximation $\hat{P}^{\pi}$ based on stationary distribution in the PI update rule, which leads to a completely different iterative algorithm. There are more similarities between our work and Ref. [2]. In particular, they suggest using a rank-$s$ approximation $\hat{P}^{\pi}$ based on the first $s$ eigenvalues (in magnitude) of $P^{\pi}$ in the matrix splitting scheme of Ref. [1]. However, they only focus on _policy evaluation_ problem. In particular, the rank-one DDVI proposed for the _control (i.e., policy optimization) problem_ uses a _fixed_ approximation $\hat{P}^{\pi}$ over the entire iterations of the algorithm. Actually, the improvement provided in [2, Thm. 4.1] is only local and happens when the greedy policy is already optimal. 
Moreover, we would like to point out that there are two reasons why we did not consider higher-order (i.e., $\geq 2$) approximations based on the eigenvalues of $P^{\pi}$: (1) the left eigenvector corresponding to the first eigenvalue has certain properties which allow for its estimation directly from the samples without constructing an estimate for $P^{\pi}$, which is already exploited in R1-QL; (2) it restricts the extension of the proposed algorithm to continuous state-action MDPs. Finally, regarding the existing similarities, as mentioned, we were unaware of [2] until now. Actually, an earlier version of our manuscript, including these similarities, was submitted to NeurIPS 2024 on 22 May 2024, prior to the arXiv date of [2] (15 July 2024). To preserve anonymity, we cannot share our OpenReview submission details here but would be happy to provide them to the (senior) AC or other conference program members if needed. __R2. Evaluation metric:__ The reviewer is correct. Indeed, the greedy policies generated by R1-VI are the same as the counterpart VI in each iteration. In this regard, we would like to point out the following: First, despite being a poor proxy for policy evaluation [3], the Bellman error is the most widely used metric because of its computational efficiency: there is no extra cost in using the Bellman error as the termination condition. With that in mind, R1-VI leads to a faster termination of the algorithm for a given performance bound for the greedy policy. Second, the mismatch between convergence in value space and policy space also arises in other "accelerated" VI/QL algorithms. In Figure 1 (link below), we report the value of the greedy policies over the iterations of the algorithms of Section 5 of our paper. As can be seen, Anderson/Nesterov-VI and Speedy/Zap-QL also suffer from the same limitation. Third, the asynchronous implementation of the R1-QL algorithm leads to different policies than QL. 
As shown in Figure 2 (link below), there is also an improvement in the performance of the greedy policy. As we have pointed out in Section 6 of our paper, we are currently working on theoretical guarantees for the convergence of this asynchronous algorithm. __R3. Claims and evidence:__ The assumption of Lemma 3.2, although not required for the convergence of the proposed algorithms, is indeed critical for observing an improved rate of convergence. We now have a better understanding of why this is the case. Our recent theoretical developments show that the speed-up happens only for _aperiodic and irreducible_ MDPs during the _policy evaluation_ step, in line with the results of Ref. [2]. Our numerical simulations with a GridWorld environment with an _absorbing state_ (so that the MDP is not irreducible) confirm this; please see Figure 3 (link below). Indeed, the existence of an absorbing state leads to R1-VI having the same performance as VI. We will update the manuscript to include this discussion. __Figures__ Figure 1: https://anonymous.4open.science/api/repo/r1vi_icml-2F14/file/rebuttal/policy_evaluations.pdf?v=82f0f88e Figure 2: https://anonymous.4open.science/api/repo/r1vi_icml-2F14/file/rebuttal/asynchronous_learning.pdf?v=ef94411a Figure 3: https://anonymous.4open.science/api/repo/r1vi_icml-2F14/file/rebuttal/reducible_maze.pdf?v=40f97d29 __References__ [3] Fujimoto et al. Why Should I Trust You, Bellman? The Bellman Error is a Poor Replacement for Value Error. ICML, 2022.
Rejecting Hallucinated State Targets during Planning
Accept (poster)
Summary: This paper addresses the issue of hallucinated state targets in model-based reinforcement learning (MBRL), where generative models can produce unrealistic or unreachable states, leading agents to delusional planning behaviors. Inspired by human cognition, the authors propose an evaluator that assesses the feasibility of generated targets and rejects infeasible targets before planning occurs. To ensure accurate feasibility evaluation, the authors introduce two hindsight relabeling strategies, generate and pertask, demonstrating significant performance improvements and reduced delusional planning behaviors. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical section is presented in the paper. Experimental Designs Or Analyses: I find the experimental designs reasonable and sufficiently aligned with the paper’s objectives. Supplementary Material: I reviewed the authors' submitted code and found it to be clear and reproducible. Relation To Broader Scientific Literature: The proposed approach builds upon prior model-based RL frameworks and opens the door to addressing the critical issue of hallucinated state targets through the explicit introduction of a feasibility evaluator trained via novel hindsight relabeling strategies. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: The paper addresses a critical issue in model-based RL—hallucinated targets—with implications for safety and performance. The idea of combining a feasibility evaluator with hindsight relabeling strategies has intuitive appeal. The proposed approach is a simple, plugin mechanism that could help mitigate incorrect updates or infeasible sub-goal selection in a broad class of model-based agents. The paper provides a clear taxonomy of targets (G0, G1, G2) that structures the problem effectively.
The paper includes multiple experiments—on both decision-time and background planning settings—demonstrating the reduction of delusional behaviors and improved performance. Weaknesses: While the grid-world-based tasks are carefully designed to showcase the phenomenon of hallucinations, they remain relatively simple compared to more complex, high-dimensional environments (e.g., robotics or rich 3D worlds). It is unclear if the proposed approach will scale without additional engineering. My major concern is that the experiments are mostly on tasks where there is a clear, easy-to-obtain ground truth for feasibility. This is ideal for demonstrating concept correctness but may be less straightforward to replicate or validate in continuous or partially observable domains. Other Comments Or Suggestions: Some discussion on the computational overhead would be valuable. Questions For Authors: How does the feasibility evaluator perform in partially observable domains? Would the proposed approach naturally extend if the “source state” is uncertain? If the generator produces more abstract or higher-dimensional latent goals (e.g., language instructions or subtask descriptors), what changes are required for the feasibility evaluator? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: `HOW EVALUATOR PERFORMS IN PARTIALLY OBSERVABLE DOMAINS? WOULD THE APPROACH NATURALLY EXTEND IF THE “SOURCE STATE” IS UNCERTAIN?` We agree with your intuition that it naturally extends. The evaluator takes paired inputs of the source state representation and the target representation, both of which are outputs from the TAP agent to which the evaluator is attached. NO change is needed when the evaluator is fed representations (that can encode uncertainty) from a POMDP-compatible encoder. We are confident about this, yet we know it may be more convincing to conduct experiments to provide empirical evidence. Unfortunately, ICML prohibits revisions this time.

`IF THE GENERATOR PRODUCES MORE ABSTRACT OR HD LATENT GOALS (E.G., LANGUAGE INSTRUCTIONS OR SUBTASK DESCRIPTORS), WHAT CHANGES ARE REQUIRED FOR THE EVALUATOR?` Since the evaluator takes in target representations, our design needs NO CHANGE. We validated this empirically in Sec. 5.3. We tried to make clear that our approach currently only applies to TAP agents with a function $h$ verifying whether a target is fulfilled. TAP agents’ $h$ gives us compatibility with general targets, including language instructions. Instead of changes to the evaluator, the focus should be on a proper $h$ (e.g., ask the LM if a state matches the target). We discussed this at the end of Sec. 7.

`THE GRID-WORLD-BASED TASKS REMAIN SIMPLE COMPARED TO MORE COMPLEX, HD ENVIRONMENTS. UNCLEAR IF THE APPROACH WILL SCALE WITHOUT ADDITIONAL ENGINEERING. … THIS … MAY BE LESS STRAIGHTFORWARD TO REPLICATE OR VALIDATE IN CONTINUOUS OR PARTIALLY OBSERVABLE DOMAINS.` Our reasons for the selected experiments are as follows: 1. These environments provide much more convincing quantitative validations of our claims on hallucinated targets and delusional estimates. Common benchmarks such as Atari, due to the lack of access to ground truth, cannot be properly diagnosed.
As a result, we cannot prove that our method can indeed reduce delusional planning behaviors in those environments due to their nature. 2. We aimed to show our approach’s generality by applying it to many categories of TAP methods. The compute demanded in these experiments already exceeds what our limited academic environment provides. 3. Visual simplicity does not mean task simplicity. Due to the multi-task, generalization-focused setting, agents are met with difficult combinatorial challenges that even state-of-the-art hierarchical planning methods cannot solve well, see [Zhao et al., 2024]. As an example, despite the visual simplicity, the hallucination rates remain high even with the SOTA methods used. For the points above, we focused on depth rather than breadth when considering the environments for our experiments. Our approach does not assume the input space to be discrete or continuous. We chose discrete input spaces because it is nearly impossible to compute the ground truths (used to provide convincing, rigorous analytics) in continuous spaces. For POMDPs, please check our prior responses.

`DISCUSSIONS ON THE COMPUTATIONAL OVERHEAD WOULD BE VALUABLE.` Technically, it is useful to view our solution as a special form of rejection sampling, where the proposal distribution and the support are provided by the generator, the target distribution is the re-normalized distribution over only the feasible targets, and the evaluator's feasibility estimates are used to determine rejection. Thus, the more accurate the evaluator, the more efficient the sampling process. It is impractical to assume full access to either the target or proposal distributions because they change with 1) the environment used and 2) the generator (whose output is not only different per method but also changing with learning). This means that we cannot give a blanket statement about the computational overhead, which is why we kept the discussion of overhead brief in Sec. 4 (L189, left col.).
Practically, each generated target needs only 1 evaluation. For background TAP agents that generate batches of targets, the improper ones can be rejected, and even the whole batch can be rejected without problem (no "Dyna" update in that round). For decision-time TAP agents, targets act as subgoals, and when they are rejected, the agent can retry or commit to more random exploration. With DRL, the overhead also depends on the evaluator’s networks and the complexity of the state/target representations. However, since the evaluator is a rather lightweight secondary end-to-end network, we can expect evaluations to be fast. We added a condensed version of these comments to the revised manuscript. --- We appreciate your review and your concerns about the applicability of our approach to more generic cases. We explained our choices and hope you can appreciate the difficulties we overcame in acquiring convincing results with limited resources. We hope our responses addressed your concerns well, and please consider increasing your rating. --- Rebuttal Comment 1.1: Comment: Thank you for the response and clarifications. I understand the reasoning behind the choice of grid-world tasks for rigorous analysis and quantitative validation, particularly regarding ground truth availability. That said, I still have concerns about how well the proposed approach might generalize to more complex, high-dimensional environments without additional engineering. While the current experimental results are convincing for the selected tasks, it remains unclear how it would perform in other complex domains where ground-truth feasibility is not readily available. Thus, I will maintain my current score (weak accept).
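The rejection-sampling view described in the rebuttal above can be sketched as follows (a minimal sketch; `generator`, `evaluator`, and the acceptance threshold are hypothetical stand-ins, not the paper's implementation):

```python
import random

def sample_feasible_targets(generator, evaluator, source, n,
                            threshold=0.5, max_tries=1000):
    """Rejection sampling over generated targets.

    The generator supplies the proposal distribution (and its support);
    the evaluator's feasibility estimate gates acceptance, so accepted
    samples approximate the re-normalized distribution over feasible
    targets. Each generated target needs only one evaluation; a more
    accurate evaluator wastes fewer proposals.
    """
    accepted = []
    for _ in range(max_tries):
        if len(accepted) == n:
            break
        target = generator(source)
        if evaluator(source, target) >= threshold:
            accepted.append(target)
    return accepted
```

For a decision-time agent, an empty return can trigger a retry or more random exploration; for a background agent, a fully rejected batch simply yields no update that round.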
Summary: The paper proposes to augment Target-Assisted Planning (TAP) methods with an evaluator that rejects generated states that are infeasible, improving performance. The proposed method is evaluated on two environments, SwordShieldMonster (SSM) and RandDistShift (RDS), with 3 different TAP agents: Dyna, Skipper and LEAP. It proposes 4 different mechanisms to create targets for the evaluator:
- Future (F), which samples states in the future
- Episode (E), which samples states from the same episode
- Generate (G), which replaces states by their generated version predicted by the generator
- Per task (P), which samples states from the same environment task

The experiments show that a combination of target mechanisms such as (E+P+G) helps to further reduce the evaluator's prediction errors and improve final success rates on these 2 environments.

## update after rebuttal

I appreciate the authors' clarification of the paper's contributions. I understand that the algorithm tackles the rejection of hallucinated state predictions. The authors propose to learn an evaluator to reject these hallucinations. For that, the authors propose a combination of learning rules, including two novel relabelling strategies (PerTask and Generate). The technique is evaluated on two toy environments (SSM and RDS) providing game logic to prove the efficacy of the contributions. As a final decision: I am still hesitant toward the generalization of the method. I think that the experiments done on the two environments are very compelling because they offer game logic for analysis. However, I think the paper would strongly benefit from experiments on commonly used benchmarks for model-based RL in order to provide general empirical results on top of SSM and RDS (comparison with concurrent approaches, ablation study on hallucination rejection). Visualizations of rejected predictions for some of the tasks would also help to highlight the method’s contribution.
I hence maintain my score at weak rejection, tending toward borderline. Claims And Evidence: The experiments show that the method can successfully reduce delusional behaviors and enhance the performance of planning agents. Methods And Evaluation Criteria: Yes, the TAP agents and environments allow the proposed method to be properly applied. Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: The contribution of this paper is related to model-based reinforcement learning agents that may suffer from hallucinations during the generation process. Early training of model-based approaches can sometimes be impacted by the lower generation quality of the world model. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: - The paper tackles a very interesting problem linked to TAP, which is the generation of infeasible states by world models. TAP methods usually suppose that all generated states are feasible and reduce the agent's training loss on all generated samples. This research direction deserves to be explored. - The paper demonstrates that learning an evaluator function can help to improve performance on the two chosen environments. The evaluator can successfully learn to progressively identify infeasible targets and remove them. Weaknesses: - The method is evaluated on only two simple environments (SSM and RDS). While the paper shows that performance can be improved by removing delusional predictions, it would be interesting to experiment on commonly used benchmarks such as Atari 100k or the DeepMind Control Suite. - From my understanding, the labeling of targets as feasible or infeasible requires having access to the game's inner logic (have sword, have shield), so that sampled targets can be correctly labeled as feasible or infeasible given the start state. I do not see how it could be applied to environments where the game logic is not accessible.
- The paper talks about possible extensions to other TAP methods such as MuZero, SimPLe or Dreamer but does not perform experiments on these key methods. Other Comments Or Suggestions: I tend toward borderline but would be ready to increase my score to weak accept or accept if additional experiments and/or information are provided. Questions For Authors: - Figure 3 (d), 10 (d) and 12 (d) compare the performance of Skipper and LEAP when using different labelling mechanisms. What is the performance of these methods when the evaluator is not used during training to remove predicted infeasible states? - From my understanding, the labeling of targets as feasible or infeasible requires having access to the game's inner logic (have sword, have shield). Could this method be used on environments where the game logic is not available to label states as feasible/infeasible? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: `WHAT IS THE PERFORMANCE OF SKIPPER & LEAP WHEN THE EVALUATOR IS NOT USED TO REMOVE PREDICTED INFEASIBLE STATES?` As discussed in Sec. 2 (Line 57 right column), Sec 5.1.2 (L383 left col.) and Sec. 6 (L388 r. col.), these methods already use their own evaluators to remove the targets they think are infeasible. Yet, they suffer from delusions due to the lack of exposure to truly infeasible targets absent from collected experience. Thus, our experiments with both methods "reused" their built-in distance / discount estimators, which were *originally trained with basic relabeling strategies*. We showed that by fixing their relabeling, these methods become robust against delusional planning behaviors. Only for methods without built-in estimators at all, e.g., Dyna, do we have baselines with the evaluator removed. In other words, the experiments with Skipper & LEAP focus on the importance of training data for non-delusional estimates, while experiments 5/8 - 8/8 validate the combination of architecture, learning rules and training data at the same time. We hope you can now see that the baselines you asked for are already there: they are the variants with "future" and "episode" (basic relabeling that causes delusions). We realized that this could be somewhat confusing and tried to clarify in the revised manuscript.

`THE METHOD IS EVALUATED ON ONLY 2 SIMPLE ENVIRONMENTS (SSM AND RDS). WHILE THE PAPER SHOWS THAT PERFORMANCE CAN BE IMPROVED BY REMOVING DELUSIONAL PREDICTIONS. IT WOULD BE INTERESTING TO EXPERIMENT ON COMMONLY USED BENCHMARKS …` Our reasons for the selected experiments are as follows: 1. These environments provide much more convincing quantitative validations of our claims on hallucinated targets and delusional estimates. Common benchmarks such as Atari, due to the lack of access to ground truth, cannot be properly diagnosed. As a result, we cannot prove that our method can indeed reduce delusional planning behaviors in those environments due to their nature. 2.
We aimed to show our approach’s generality by applying it to many categories of TAP methods. The compute demanded in these experiments already exceeds what our limited academic environment provides. 3. Visual simplicity does not mean task simplicity. Due to the multi-task, generalization-focused setting, agents are met with difficult combinatorial challenges that even state-of-the-art hierarchical planning methods cannot solve well, see [Zhao et al., 2024]. As an example, despite the visual simplicity, the hallucination rates remain high even with the SOTA methods used. For the points above, we focused on depth rather than breadth when considering the environments for our experiments.

`FROM MY UNDERSTANDING, THE LABELING OF TARGETS AS FEASIBLE OR UNFEASIBLE REQUIRES HAVING ACCESS TO GAME INNER LOGICS (HAVE SWORD, HAVE SHIELD). SUCH THAT WE CAN CORRECTLY LABEL SAMPLED TARGETS AS FEASIBLE OR UNFEASIBLE GIVEN THE START STATE. I DO NOT SEE HOW IT COULD BE APPLIED TO ENVIRONMENTS WHERE GAME LOGIC IS NOT ACCESSIBLE.` We respectfully note that this is a misunderstanding; our approach distinguishes itself from methods that require access to more environmental mechanics. The proposed evaluator figures out the feasibility of all targets through Eq. (2) without the need for labels: **Eq. (2) exploits the fact that infeasible targets will never be reached**. Eq. (2) enables the evaluator to learn as a secondary system alongside the TAP agent to which it is attached and figure out, from data, the feasibility of the sampled targets it is trained on. Ground truths are only used for experimental analyses; our solution operates without ANY.

`THE PAPER TALK ABOUT POSSIBLE EXTENSIONS TO OTHER TAP METHODS SUCH AS MUZERO, SIMPLE OR DREAMER BUT DO NOT PERFORM EXPERIMENTS ON THESE KEY METHODS.` In Table 2, we categorized the compatible methods into 3 categories of similar behaviors, and in experiments, **we implemented one representative method for each of the categories**.
We made a conscious effort to provide you with the most convincing results out of the limited resources at our disposal. We hope that you understand our difficulties in an academic setting. --- Thank you for your comments. Your biggest concern was about the applicability of the method without access to feasibility labeling. We hope that our explanations address this miscommunication. For your comments on experiments, we clarified that certain baselines in the mentioned figures are the original performance of the agents, as both Skipper and LEAP are equipped with built-in feasibility-like estimators, which we dual-purposed as evaluators. We are thankful that you acknowledged that “this direction deserves to be explored” and hope that our responses have addressed your concerns, and that you could increase your rating of this work to recognize our contributions. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Your rebuttal addressed my misunderstanding related to the contributions of the paper, the choice of environments used and the comparisons. It appears that the main contributions of the paper are (not to identify hallucinated model predictions, which was already proposed by Nasiriany et al. (2019); Zhao et al. (2024); Lo et al. (2024)), but:
- 1. an evaluator model that does not require ground truths for training, in contrast to previous approaches
- 2. the combination of labeling strategies to decrease feasibility error and improve performance
- 3. two novel labeling strategies (PerTask and Generate)

This somewhat downgrades the estimated novelty of the paper compared to my initial review. Moreover, reviewer m9Mj pointed out that "Generate" and "PerTask" are similar to previous works (Zawalski et al., Andrychowicz et al.), which downgrades contribution number 3.
In light of these clarifications on the paper's contributions, I see the introduction of a hallucination detection method that does not require "ground truths" as notably more insightful for the community than the contributions highlighted in the paper. This reinforces the initial point that I made on the application of the method on Atari 100k. I think the paper would greatly benefit from the application of the method on commonly used environments that do not provide ground truths / inner logic. Proving the effectiveness of rejecting hallucinations on diverse and commonly acknowledged benchmarks like Atari 100k or Crafter, on top of the existing SSM and RDS experiments, would provide general empirical results on the effectiveness of the method rather than a proof of concept. I choose to maintain my initial weak reject rating, leaning more toward borderline, but I do not oppose the paper being accepted. W1: I think the paper proposes an interesting solution to reject hallucinations without access to the environment's inner logic but instead highlights orthogonal novelties that are less notable. W2: The benchmarking of the method is limited due to the comparison with previous approaches that require "ground truths", and also due to the lack of computing resources. --- Reply to Comment 1.1.1: Comment: We really appreciate your comments and your open-mindedness to our explanations. First, we sincerely agree with your points on more general empirical results. But as we explained, we are short of resources to make it happen. **We'd like to point out that several claims in your newest response are not factual, and we believe, based on the good will you have shown in your reply, that our explanations here will make you find our contribution to be more than acceptance-deserving.** --- `not to identify hallucinated model predictions, which was already ...` The 3 works mentioned here did NOT identify hallucinated model predictions.
In fact, **this submission is indeed the 1st work that systematically studies hallucinated targets in planning**. Our previous explanations (on Skipper & LEAP having their built-in estimators to identify infeasible targets) may have confused you. As investigated in this work, there are different types of infeasible targets, notably including 1) those that appear in the interaction history (which Skipper and LEAP could identify) and **2) those that are never going to be experienced by agents** (hallucinations, which most existing methods, including the two, CANNOT identify). The latter kind is the focus of this work.

`1. an evaluator model that do not require ground truths for training, in contrast to previous approaches / W2: ... due to the comparison with previous approaches that require ground truths` We would like to be honest and point out that the learning rules in existing approaches, e.g., Skipper, already do not assume access to ground truths (to figure out the feasibility of targets). Yet, what distinguishes this work from the previous ones is that **despite the proper auto-discovery learning rules, previous methods will not arrive at a correct understanding of the hallucinated targets that are never going to be experienced** (the latter type in the previous point). This work identified how such feasibility delusion (*delusions are errors that persist due to design flaws and cannot be addressed by more training*; we use these terms rigorously) is formed and proposed to use the relabeling strategies to provide correct data exposure. W2: The previous approaches do NOT need access to ground truths, as they are not even aware of the hallucinated targets. Rather, we used these ground truths to obtain quantitative metrics to show that 1) hallucinated targets exist, 2) they cause delusions, which in turn cause delusional plans, and 3) addressing them leads to better performance in many TAP methods.
We believe that proving our approach's effectiveness directly is more convincing than blindly showing only the performance boosts. On Atari, this would have been impossible.

`2. ... to decrease feasibility error and improve performance` As explained previously, our contribution addresses the feasibility delusions (the portion of the feasibility error corresponding to the hallucinated targets, un-learnable by most existing approaches).

`strategies are similar to ... existing ... downgrades contribution` The premise of this work is to raise awareness of the problem of hallucinated targets within a wide range of methods (TAP). We want to share our thoughts on the contributions of this work:
- As a first, we systematically investigated the properties and impact of different kinds of infeasible targets, most notably G1 and G2 and the generic set correspondence. Guided by these analyses, we devised a generic target evaluator that rejects infeasible targets (of both kinds) and can work with many TAP agents with different planning behaviors.
- We shared our desiderata, in that the evaluator should act as an add-on **without the need to change the behavior or the architecture of the agent it is attached to**.
- We highlighted that, without proper training, the evaluator WILL produce **delusional estimates**, just as in existing methods such as Skipper, rendering the evaluator-based solution futile. Notably, many TAP methods, such as those with fixed planning horizons, do not have a feasibility-like estimator to begin with and blindly accept all generated targets.
- From the data exposure perspective, we analyzed why learned evaluators become delusional.
And we proposed to use 1) two alternative relabeling strategies that work hand-in-hand with 2) an efficient architecture with distributional outputs and 3) off-policy compatible learning rules capable of discovering the feasibility of **all exposed targets** (most existing methods are oblivious to the latter kind of hallucinated targets).
- Our experiments validate significant reductions in delusional behaviors and enhancements in the performance of several kinds of TAP agents.

--- Your reply gave us a glimmer of hope that this work may be accepted, which we firmly believe it deserves. **We believe that your current evaluation is still impacted by some miscommunications, and that is why we are writing this urgent reply asking you to reconsider.** Thank you!
Summary: The paper analyzes the issue of generating invalid subgoals during planning. The authors categorize different failure modes and propose strategies for learning a classifier that can be used to estimate the distance to a proposed goal, including whether it is reachable at all. Through experimental evaluation in a grid-based task, the paper analyzes the impact of different learning strategies on the effectiveness of the evaluator.

## Update after rebuttal

Thank you for the answers. I acknowledge the differences between the proposed strategies and those present in the literature. However, I believe these differences need to be discussed more precisely in the paper to better highlight the contribution -- as noted by other reviewers as well, this is not currently clear. I leave the particular choice of references to the authors. I acknowledge the authors' focus on evaluating the benefits of a non-delusional evaluator compared to a standard one. However, completely omitting the non-evaluator aspect makes the analysis incomplete. Even if performance without the evaluator is significantly weaker, this should be demonstrated and briefly remarked upon. There is value in advocating for a non-delusional evaluator only if using an evaluator is beneficial in the first place, even if the overall focus of the paper is slightly different. In summary, I believe the paper is a solid contribution, but a careful revision would considerably strengthen its impact. Specifically, I suggest:
- Revising the stated contributions as discussed,
- Clarifying the novelty of the proposed strategies,
- Including a naive non-evaluator baseline in the comparison,
- Adding experiments in widely studied environments.

With such improvements, I would consider the paper very strong. For now, I remain on the fence. I acknowledge the changes proposed by the authors, which seem to move in the right direction. I reflect this by increasing my rating.
I would not oppose accepting the paper, though I believe it could still be significantly strengthened with minor effort. Claims And Evidence: The paper is generally sound, although the main claims should be reformulated, as they are too optimistic. The idea of training a model to identify infeasible subgoals was already proposed, although possibly not extensively studied (see e.g. [Zawalski et al.]). Furthermore, the proposed relabeling strategies are not "novel", as Generate is similar to the one used e.g. in [Zawalski et al.], and Pertask is equivalent to Random from [Andrychowicz et al.]. While this does not in itself negate the contribution of the paper, the formulation of the main contributions has to be revised. I believe that is why the paper lacks focus. Additionally, the paper only indirectly argues that identifying the invalid subgoals is useful in general. For instance, the main claims do not state any performance advantage, and the experiments focus on comparing different strategies for training the evaluator rather than the impact of the evaluator on performance. The only somewhat relevant plot is Fig 3d (and counterparts in the appendix). A very good step in this direction is Section 5.2. The paper would strongly benefit from including more evaluation of this kind, i.e. demonstrating that various methods can do much better when equipped with the evaluator. Currently I see little such discussion, which is a pity. References: [Zawalski et al.] _Fast and Precise: Adjusting Planning Horizon with Adaptive Subgoal Search_ [Andrychowicz et al.] _Hindsight Experience Replay_ Methods And Evaluation Criteria: The presented evaluation makes sense for the formulated main claims. However, slightly changing the scope to cover the usefulness of the evaluator, possibly also in well-established benchmarks, would improve the contribution. Theoretical Claims: There is one theorem: Result 4.1. No proof is referenced, but since it is rather straightforward, it needs no proof.
Experimental Designs Or Analyses: Yes, I checked the experiments in the main part. Supplementary Material: I skimmed the whole appendix, with special attention to Table 2, Figure 6, Figure 7, Table 3, Figure 10, Figure 16. Relation To Broader Scientific Literature: The problem of detecting invalid subgoals is not new and has been (at least partially) studied in previous works. Variants of the proposed strategies for learning the evaluator can also be found in related works. However, I am not aware of a systematic study of this topic, so it has the potential to be a good contribution. Essential References Not Discussed: One additional reference I would like to be discussed is [Zawalski et al.], as detailed in the comments. Other Strengths And Weaknesses: While the paper has potential, it should be much more focused. Too little space is reserved for experiments, and because of that many of them had to be moved to the appendix. I suggest making the analysis more concise and providing broader experimental support. I suggest working on the main contributions to establish the focus. Something around ["Systematic analysis of the infeasible-subgoals issue", "Effective training and architecture for the Evaluator", "Experimentally validating the impact of the Evaluator on performance"] could be a good starting point. Other Comments Or Suggestions: Are the targets for the Generate strategy additionally generated during sampling from the buffer, or are they the targets generated by the generator during experience collection and stored afterwards? I suppose the former, but that should be made clear in Sec 4.2.1. Questions For Authors: 1. Please discuss the impact of having the evaluator on the performance of hierarchical methods. The naive approach to invalid targets is to ignore them, as they are not very common, and even invalid subgoal guidance leads somewhere, from where another (hopefully valid) subgoal can be generated. Are the methods equipped with the Evaluator more effective?
What is its computational overhead? 2. Please discuss the relation of the proposed relabeling strategies (claimed to be novel) to [Zawalski et al.] and [Andrychowicz et al.]. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We reordered the questions to streamline our response.

`Discuss the relation … to [Z et al.] & [A et al.]. Are the targets for “generate” generated during sampling, or …?` [Z et al.] Our approach of rejecting infeasible targets is indeed similar to that in [Z et al.]. Yet, the approaches differ significantly. The verifier in [Z et al.] trains on a collected dataset that informs about the generated subgoals (not relabeled) and requires separate training. Unlike the method-specific approach in [Z et al.], we propose a generally applicable secondary system running alongside TAP agents, using relabeling, and thus it can figure out the infeasible targets (sets of states) that were not followed, without extra interactions or supervised training. We wanted to try our best to acknowledge the developments leading to ours. This is why we discussed MHER (Yang et al., 2021) in Sec 6 as the 1st work to apply model-based relabeling to transitions (even closer to “generate”) and (Jaferjee et al., 2020) for their early idea of rejecting delusional generations. It was an *honest mistake* that we missed [Z et al.], and we have now added proper acknowledgements. "generate" & JIT relabeling: "generate" is novel not only for its flexibility to be used just-in-time (JIT, relabeling only after a batch is sampled, to adhere to the generators’ changing output distributions), but also for its effectiveness in addressing feasibility delusions. Compared to approaches like MHER, "generate" lowers the needed storage and provides timely coverage of the generators’ outputs, which is especially helpful in continual-learning settings. "pertask" & [A et al.]: "pertask" is ONLY equivalent to “random” in [A et al.] when agents are trained on a single task.
In settings where agents are trained on a few tasks and are expected to generalize during evaluation (the setting our experiments were based on, to force the evaluators to understand infeasible targets instead of memorizing), "pertask" enables relabeling beyond trajectory-level against delusions, per Sec 5.1. `the exps focus on strategies of training rather than the impact of evaluator on performance. … only indirectly argues that identifying invalid subgoals is useful. the main claims do not state any performance advantage … the impact of the evaluator on the performance of hierarchical methods?` **This is a crucial misunderstanding that we wish to clarify, so we can resolve your other concerns effectively.** Our aim was not to show that methods can do better with an evaluator, but with a non-delusional one. Our experiments on the two HP agents focused on the fact that their basic relabeling strategies produce delusional evaluators (they have built-in evaluators). Only for methods without estimators to begin with, e.g., Dyna, do we need to completely inject an evaluator. We discussed these explicitly in Sec 2, Sec 5.1.2 & Sec 6. Due to the reply length limit, please find more details on this in the first reply to Reviewer GvFL. We added clarifications in the revision. `The naive approach to invalid targets is to ignore, as they aren’t very common, and even invalid subgoal guidance lead somewhere, from where another subgoal can be generated.` In Sec 2, we formulated your described scenario for the naïve approach, which motivated our contributions. The naïve approach only works when 1) infeasible targets are indeed rare and 2) the state space is simple. Also, the approach is boosted by 3) survivorship bias. 1. There are generally no measures of how frequently infeasible targets appear, because most existing methods are not tested on proper environments that can be solved to produce the true frequency of infeasible targets. 2.
There is little hope that invalid guidance could lead agents to a recoverable region of a complex state space. 3. Empirical observations of how naïve approaches could work are influenced by survivorship bias, as methods that are more affected by delusional plans have worse performance and are thus less likely to be recognized. `computational overhead?` Please see our reply to Reviewer bSxw (last point). `the main claims should be reformulated, as they are too optimistic. The idea … to identify infeasible subgoals was already proposed … ` We hope our previous explanations addressed your concerns on this point. We summarized our work's claims and tried our best to clarify them in the revision: Building on the ideas of rejecting invalid goals such as in [Z et al.], our extensive study reveals the types of infeasible targets in TAP agents. Accordingly, the generality of our proposed solution leads to non-delusional feasibility estimates beyond HP methods. --- We appreciate your detailed review! The intention of this work is to inform the research community about the associated risks and improper assumptions and save everyone's time and effort. We take your comments very seriously and tried our best to address your concerns. We hope you can increase your rating to recognize our attitude and the positive impact this work could have.
Scalable First-order Method for Certifying Optimal k-Sparse GLMs
Accept (poster)
Summary: In this work, the authors propose a novel FISTA-based algorithm for computing a lower bound in the Branch-and-Bound method. The new algorithm utilizes several customized components and outperforms universal optimization solvers on both artificial and practical datasets. Please see the following sections for my detailed comments. Claims And Evidence: Due to the time limit, I did not check the correctness of the theory, except those parts briefly mentioned in the main paper. The theoretical claims seem correct from checking the main paper. Methods And Evaluation Criteria: The methods and evaluation criteria make sense to me. Theoretical Claims: Due to the time limit, I did not check the correctness of the theory. The theoretical claims and the proofs in the main paper seem correct. Experimental Designs Or Analyses: The experiment settings and results are sound and reasonable to me. Supplementary Material: I did not check the supplementary material due to the time limit. Relation To Broader Scientific Literature: This paper is related to the topics of the ICML conference and should be interesting to audiences from the machine learning and optimization fields. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Please see my comments in other sections. Other Comments Or Suggestions: - Algorithm 3, Line 5: should the step size be $\phi / (\phi + 3)$? Questions For Authors: Please see my comments in other sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for carefully reading our paper! This is indeed a typo. The implementation is correct in our submitted source code. We will fix this typo in the revision.
Summary: This paper explores the use of branch-and-bound (BnB) frameworks to solve sparsity-constrained optimization problems. The authors derive a lower bound for the optimization problem using a perspective relaxation formulation. To efficiently solve the resulting perspective relaxation, the authors employ a first-order proximal gradient algorithm. Extensive experiments on both synthetic and real-world datasets demonstrate that the proposed approach accelerates dual bound computations and improves efficiency. Claims And Evidence: yes Methods And Evaluation Criteria: The parameter $k$, set to $k \in \{5,10,15\}$, is too small. Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: most of the supplementary material Relation To Broader Scientific Literature: Most of the results are empirical, with a lack of theoretical analysis. Essential References Not Discussed: yes Other Strengths And Weaknesses: ### **Strengths:** - **S1.** The proposed method significantly accelerates computations compared to previous BnB algorithms. - **S2.** The authors derive the proximal operator of the function $g(\beta)$ in problem (7). The authors demonstrate that the customized PAVA algorithm in Algorithm 1 has an $\tilde{O}(p)$ time complexity. These results are reasonable to me. ### **Weaknesses:** - **W1.** The main novelty of the paper is unclear. The perspective formulation is not new, the BnB framework and the APG algorithm are well-established, and the Pool Adjacent Violators Algorithm (PAVA) has already been used for solving isotonic regression problems and SLOPE models. - **W2.** The justification for using the perspective formulation to estimate the lower bound is not well explained. - **W3.** The Customized PAVA algorithm in Algorithm 1 appears to be an incremental extension of the standard PAVA algorithm.
Its design is primarily based on leveraging the property that the optimal solution of problem (11) maintains the same order as the input data $\mu$. While the algorithm introduces some new ideas, its overall contribution seems limited. - **W4.** The parameter $k$ used in the experiments is too small, set to $k \in \{5,10,15\}$. With such small values, standard local algorithms such as OMP, iterative hard thresholding, BCD methods would likely perform well. This raises concerns about whether the proposed method offers a meaningful improvement in real-world applications. Other Comments Or Suggestions: No Questions For Authors: - **Q1.** How does the scalability of the proposed method change when $k = 0.1p$? Code Of Conduct: Affirmed. Overall Recommendation: 2
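For readers less familiar with the standard PAVA that W1 and W3 refer to, a minimal least-squares isotonic-regression sketch could look like the following. This is purely illustrative and is not the paper's customized Algorithm 1 (which additionally exploits the ordering property of problem (11) and handles the box constraint):

```python
def pava(y):
    # Pool Adjacent Violators: least-squares fit of a non-decreasing
    # sequence to y. Each block stores [sum, count]; adjacent blocks are
    # merged (pooled) whenever their means violate the ordering.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while (len(blocks) > 1
               and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]):
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    out = []
    for total, count in blocks:
        out.extend([total / count] * count)  # each block takes its mean
    return out
```

For example, `pava([3, 1, 2, 4])` pools the violating pair (3, 1) into their mean 2, producing the monotone fit [2, 2, 2, 4].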
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback. 1. **k is not large enough** a. *Many real-world datasets are naturally sparse*. Small $k$'s are sufficient for accurate prediction and can help avoid overfitting, especially on the validation set. Small $k$ also improves interpretability. This is true for the two real-world datasets used in the paper. We can follow the experimental setup section (Line 973-977) to select the correct $k$ via cross-validation. The best $k$ for Cancer Drug Response is $5$ (already used), and the best $k$ for DOROTHEA is $26$ (in the submission, we used $15$). Below is the BnB runtime comparison when $k=26$ for DOROTHEA. Our method is still significantly faster than MOSEK.

| | MOSEK | ours |
|:---:|:---:|:---:|
| running time (s) | 661.33 | 228.34 |

b. *Some data science practitioners require $k$ to be small*. This is the case for the sparse identification of dynamical systems (Bertsimas & Gurnee, 2023). Physicists often want the differential equation to be succinct in order to match some physical intuition or prior knowledge. c. Lastly, *our lower bound calculation is still fast when $k$ is large*. We re-ran Fig. 2 (main paper) with $k=500$ (runtime in seconds). We only compare with MOSEK, as the other three baselines already struggle when $p=4000$.

Linear regression:

| method | p=1000 | p=2000 | p=4000 | p=8000 | p=16000 |
| :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |
| MOSEK | 0.970 | 3.097 | 13.565 | 115.554 | 573.247 |
| ours | 0.327 | 0.876 | 10.623 | 44.638 | 108.475 |

Logistic regression:

| method | p=1000 | p=2000 | p=4000 | p=8000 | p=16000 |
| :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |
| MOSEK | 1.794 | 9.266 | 49.801 | 284.638 | 1969.305 |
| ours | 0.601 | 5.993 | 42.802 | 177.481 | 640.345 |

2. **Novelty is unclear: perspective formulation, BnB algorithm, APG algorithm, and PAVA are not new** We clarify the distinction between prior work and our contributions: a.
*BnB*: Our key innovation lies not in using BnB but in developing an efficient first-order method for computing the BnB lower bounds. b. *Perspective relaxation*: Again, the novelty is not in using the perspective relaxation but in solving it. c. *APG*: Although the APG framework is standard, its direct application to our problem is impossible without our method to evaluate the proximal operator efficiently and exactly. d. *PAVA*: The novelty is not in using PAVA but in linking $\text{prox}_{\rho^{-1}g}(\cdot)$ with PAVA via a generalized isotonic regression reformulation and then invoking the Moreau decomposition theorem. Additional contributions include: a. Lemma 3.1 shows $g^*$ is a simple composition of the Huber loss and the top-$k$ norm. b. Our Algorithm 2 computes $g(\beta ^{t+1})$ exactly and efficiently, which is crucial for restarts. c. Our lower bound computation is GPU-friendly. 3. **Justification for perspective formulation is not clear** Our main goal is to certify optimality, which necessitates using a BnB framework, whose efficiency depends strongly on obtaining tight lower bounds. Our perspective relaxation produces tighter lower bounds than traditional relaxation approaches (such as the $\ell_1$ relaxation in our Q1 to Reviewer enX8). 4. **Greedy/heuristic algorithms** We completely agree that heuristics are very effective. The heuristic method used in our work is already a variant of OMP (Line 983-989; also see our source code) and can sometimes find the optimal solution even within a few seconds (the whole BnB algorithm can still take hundreds of seconds, though). However, this is not the focus of this work and is orthogonal to our goal, which is to certify optimality. Our contribution lies in efficiently computing the lower bounds at each node. 5. **Meaningful real-world applications for certifying optimality** Please see Section 1.2 (Related Works) for more applications. We highlight a few below: a.
*Sparse identification of dynamical systems (Bertsimas \& Gurnee, 2023)*: The datasets contain highly correlated features, on which solving to optimality can greatly outperform heuristic methods. b. *Medical scoring systems (Ustun \& Rudin, 2019)*: Certifying optimality ensures that physicians provide the best disease diagnostic system to the patients. c. *Portfolio optimization (Bienstock, 1996)*: Solving the problem to optimality ensures that companies are picking the optimal financial trading strategy. We sincerely appreciate your thoughtful questions and engagement with our work. We hope to have answered your concerns and questions, and we would be happy to provide further explanations.
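The Moreau-decomposition route mentioned in point 2d above can be illustrated on a much simpler function than the paper's $g$. The sketch below is our own illustration (not the paper's algorithm): for $f = \|\cdot\|_1$, whose conjugate $f^*$ is the indicator of the $\ell_\infty$ unit ball, computing the prox through the conjugate recovers the familiar soft-thresholding operator:

```python
import numpy as np

def soft_threshold(v, lam):
    # direct (primal) prox of lam * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_via_moreau(v, lam):
    # Moreau decomposition: prox_{lam f}(v) = v - lam * prox_{f*/lam}(v/lam).
    # For f = ||.||_1, f* is the indicator of the l-inf unit ball, whose prox
    # (at any positive scaling) is the projection onto [-1, 1]^p.
    return v - lam * np.clip(v / lam, -1.0, 1.0)
```

Both routes give identical results on any vector; in the paper's setting the $\ell_1$ norm is replaced by $g$, whose conjugate prox is what the customized PAVA computes.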
Summary: This paper studies sparse generalized linear models with cardinality constraints. Existing branch-and-bound methods are not computationally efficient due to expensive or slow dual bound computations. To overcome this, the authors propose a first-order proximal gradient algorithm to solve the perspective relaxation of the problem. The method eliminates the need for costly second-order cone programming and incorporates a restart strategy to accelerate convergence. Experiments demonstrate improvements in computation. Claims And Evidence: The title of the paper is somewhat misleading, as the proposed method primarily relies on a convex relaxation rather than directly addressing the original sparse ridge regression problem (1). Additionally, the claims about the optimality gap are supported only by numerical experiments, without theoretical justification. Methods And Evaluation Criteria: - The theoretical results in the paper focus only on computational properties and lack results on optimization. For example, what is the quality of the solution? - What is the empirical performance of the method when applied to Poisson regression? Theoretical Claims: I have reviewed Appendix A and examined the proof of Lemma 3.1 in detail. While the proof appears to be correct, some assumptions are not clearly stated or justified in the paper. Experimental Designs Or Analyses: The experimental design does not consider scenarios where $k$ is large (e.g., $p/2$), especially when $p$ is also large. Supplementary Material: Exactly! It includes the implementation of the method and the experiments. Relation To Broader Scientific Literature: The key contributions of this paper align with the broader literature on optimization with sparsity constraints. Thus, it may be helpful for high-dimensional data analysis. Essential References Not Discussed: Using GPUs to speed up sparse learning is not new. There is existing literature on this topic (see, e.g., [1]).
Therefore, I believe some statements in this paper are exaggerated. - [1] Blanchard, Jeffrey D., and Jared Tanner. "GPU accelerated greedy algorithms for compressed sensing." Mathematical Programming Computation 5.3 (2013): 267-304. Other Strengths And Weaknesses: - **Strengths:** The empirical results show that the proposal is promising for reducing computational time. - **Weaknesses:** - I believe the special case of Lemma 3.1, where $M=\infty$, likely appears in existing literature, but this is not discussed in the paper. - The paper is not well-structured: - It is unclear which problem the authors aim to solve. - Some information about FISTA should be presented before the Methodology section. - The assumptions in the paper are not clearly stated. Other Comments Or Suggestions: Typos: - Table 1: p should be presented in math form. - "Alas" --> "Also" Questions For Authors: Please refer to **Claims and Evidence**, **Methods and Evaluation Criteria**, and **Weaknesses**. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback. We briefly restate our contribution to avoid any misunderstanding. Our paper addresses certifying optimality or quantifying the optimality gap for $\ell_0$-constrained GLMs, **without making any assumptions on the data**. Our motivation stems from the need for optimal solutions in high-stakes applications, where solving the $\ell_0$ problem ensures superior quality, interpretability, and trustworthiness compared to heuristics. However, the problem is NP-hard, necessitating a BnB algorithm. This BnB framework consists of several key components, including heuristics, branching, bounding (calculating lower bounds), and pre-solving, among many others. The effectiveness of BnB methods strongly depends on the quality and speed of solving its continuous relaxation (lower bound). In this work, we have proposed a first-order method to solve it efficiently. By doing so, we have significantly expanded the size/scale of the datasets on which the BnB approach can be successfully applied. With this explanation and context, we can now answer the questions raised by the reviewer. 1. **Confusion about the problem to solve** Our contribution is aimed at efficiently computing the lower bound for $\ell_0$-constrained GLMs as the main workhorse for solving the full problem by BnB. We hope that with adjustments to the paper (and the title, see below) this confusion is removed. 2. **Optimality is supported without theoretical justification** It should now be clear that optimality is guaranteed by the use of the BnB algorithm. 3. **Title is misleading** Hopefully we have clarified the reviewer's misunderstanding and explained why the title represents our goal and contribution well. However, we are open to discussing an alternative title along the lines of "Certifying Optimality of k-Sparse GLMs by Scalable First-order Lower Bound Computation". 4.
**Some assumptions are not clearly stated or justified** We do not impose any assumption on our datasets. However, if the reviewer thinks there are specific parts of the paper that require more explicit discussion, we are happy to follow up on them to improve the clarity of the paper. 5. **Cite Blanchard and Tanner** Thank you for this reference. We will cite this work in the revision. While both works use GPU acceleration, they address fundamentally different problems. The cited work focuses on accelerating greedy/heuristic methods through GPUs. There is no requirement of proving optimality. In contrast, our work harnesses GPUs specifically for computing lower bounds in BnB, which is crucial for pruning nodes and certifying optimality. By relying solely on matrix-vector multiplications rather than solving linear systems, we achieve the first truly GPU-efficient implementation for computing the BnB's lower bounds. 6. **$k$ is not large enough** Due to the space limit, please see our reply (Q1) to Reviewer X5XP. 7. **Poisson regression** We ran the setup in Figure 2 (main paper) with Poisson regression for MOSEK, ours, and ours+GPU. The running times (in seconds) are reported below. "***" denotes reaching the time limit (1800s).

| method | p=1000 | p=2000 | p=4000 | p=8000 | p=16000 |
| :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |
| MOSEK | 2.537 | 15.505 | 136.790 | 919.412 | *** |
| ours CPU | 2.505 | 12.821 | 95.583 | 572.876 | *** |
| ours GPU | 2.637 | 6.142 | 5.933 | 17.338 | 25.954 |

The GPU version of our method scales very well on large-scale instances. 8. **Cite more papers when $M=\infty$** Due to the space limit, please see our reply (Q1) to Reviewer 3khc. In the revision, we will cite those two papers and discuss the key differences. 9. **More background information on FISTA** Thank you for the suggestion. In the revision, we will include a section in the appendix covering the FISTA algorithm and the Nesterov acceleration method. 10.
**Usage of "alas"** Thank you very much. We will change "alas" to make clear that we meant the sentence in an adversative sense, i.e., we will use "however" in the revision. We sincerely appreciate your thoughtful questions and engagement with our work. We hope to have answered your concerns and questions, and we would be happy to provide further explanations.
Summary: The paper proposes a new algorithm for solving lower bounds in the BnB framework. It solves a composite problem using an efficient restarted FISTA algorithm, for which an efficient way to exactly compute the function value and the proximal operator is given. As such, the paper achieves impressive efficiency gains compared to existing methods. Claims And Evidence: The authors prove their main claims or give appropriate references when necessary (Section 2 and 3), and provide extensive experiments to show the advantage of their method (Section 4) Methods And Evaluation Criteria: The setting of the paper makes a lot of sense: it builds on existing methods (BnB methods using lower bounds based on relaxations, to solve sparse optimization problems), and replaces the part of the lower bound computation by an efficient FISTA algorithm using a novel proximal operator. Theoretical Claims: I did not check the correctness of the proofs in detail, but I am familiar with similar literature on the computation of proximal operators for sparse penalties, and the results make sense to me at first sight. Experimental Designs Or Analyses: The experiments are synthetic experiments and their setting makes sense to me. Supplementary Material: I have read the appendix. Relation To Broader Scientific Literature: This paper gives a more efficient way to solve sparse optimization problems, which is very useful in many areas of machine learning such as medical scoring systems or portfolio optimization, as described in the paper's introduction. Essential References Not Discussed: This is just a suggestion, but I think it might be interesting to cite the papers [1] or [2] below related to the k-support norm.
Indeed, the function $g^{\*}$ in the paper is very similar to the (half-squared) top-k norm (the top-k norm is the norm of the top-k elements in absolute value), which is the dual function of the (half-squared) k-support norm (since the top-k norm is the dual norm of the k-support norm). More precisely, for a given $\boldsymbol{\alpha}$, if for all $i$, $|\alpha_i| \leq M$, then $g^*(\boldsymbol{\alpha})$ is exactly the half-squared top-k norm. Now, the proximal operator of the k-support norm is known [1] and can be efficiently computed from [2]: therefore I wonder if a similar technique could be used to give an alternative computation for the proximal operator of $g^*$ (related to the one of $g$). Indeed, Algorithm 1 seems to involve different techniques, with the merging of intervals. In any case, I notice that in [2], the complexity of the proximal operator is $O(p \log(p))$, which is the same as the proximal operator in this reviewed paper (assuming $\tilde{O}(p)$ in the paper means $O(p \log(p))$, not $O(p \text{poly}(\log(p)))$ for some higher-order polynomial? Maybe the authors can confirm). Therefore, I think this reviewed paper gives a very efficient formulation, and therefore this remark is more just for the sake of curiosity rather than a suggestion for improving the algorithm. [1] Sparse Prediction with the k-Support Norm, Argyriou et al. [2] New Perspectives on k-Support and Cluster Norms, McDonald et al. Other Strengths And Weaknesses: I think this paper uses ingenious techniques to make the computation of lower bounds in BnB much more effective. The paper uses innovative techniques for the computation of the proximal operator, and I think the restart technique is also welcome, to improve the efficiency of FISTA: that last technique actually makes a big difference as the authors show in their experiments. The overall complexity of the algorithm is shown to be much better than state-of-the-art solvers like Gurobi, which is very encouraging.
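For concreteness, the half-squared top-k norm described above (the quantity $g^*$ reduces to when all $|\alpha_i| \leq M$) can be sketched numerically; the helper below is purely illustrative and not code from the paper:

```python
import numpy as np

def half_squared_topk_norm(alpha, k):
    # (1/2) * squared l2 norm of the k largest-magnitude entries of alpha,
    # i.e. the half-squared top-k norm (dual of the k-support norm)
    top_k = np.sort(np.abs(alpha))[::-1][:k]
    return 0.5 * np.sum(top_k ** 2)
```

For instance, with `alpha = [3, -4, 1, 2]` and `k = 2`, the two largest magnitudes are 4 and 3, giving 0.5 * (16 + 9) = 12.5.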
Other Comments Or Suggestions: See questions below. Questions For Authors: I first have just a small question regarding the comparison with the k-support norm, although the authors can omit it as it is just for sake of curiosity rather than a suggestion of improvement. And I would also have another second question regarding the performance of the full algorithm plugged in the BnB framework, to solve (1). Indeed, I think it would be interesting to compare the complexity or running time of solving the full optimization problem (1) with the method in the paper (BnB with lower bounds based on the improved FISTA) vs. with other methods. Can we already say that the method in the paper would also be more efficient than other methods for solving (1) ? More precisely: this paper provides a more efficient first order method to compute lower bounds for BnB. In the introduction, it is said such first order methods are usually slow, but give tighter bounds than methods based e.g. on conic relaxations. Now, since the method in the paper is faster, can we say that overall (i.e. for solving the full (1)), it is a better methods than all other alternatives ? Under which conditions is it ? (Note: I am less familiar with the literature on BnB so feel free to react if my question needs to be reformulated/updated) Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We address their concerns below: 1. **Connection to Argyriou et al. and McDonald et al.** Thank you for these valuable references--we will cite and discuss both papers. We fully agree with your observations. We clarify some key differences: **Difference 1** Our function $g(\beta)$ generalizes the k-support norm. Specifically, when $M=\infty$, our formulation reduces exactly to the standard k-support norm. However, our work makes several novel contributions beyond prior works: a. We are the first to study the Fenchel dual of $g$ b. We derive an explicit closed-form expression for $g^*$ (Equation 9) c. Our analysis captures bounded variable constraints ($ M < \infty $) **Difference 2** From a computational perspective, ours differs fundamentally from prior works: a. Algorithmic Framework: Both cited papers compute the proximal operator directly in the primal space (their Algorithm 1's), while our paper employs a dual approach by computing the proximal operator of the dual function $g^*$ and applying the Moreau decomposition theorem (Equation 12) b. Constraint Handling: To the best of our knowledge, the primal approach struggles to incorporate box constraints effectively, whereas our dual framework naturally accommodates box constraints while maintaining computational efficiency. c. Implementation Advantages: The dual perspective provides a more flexible computational framework, enabling efficient handling even for constrained optimization scenarios 2. **Does $\tilde{O}(p)$ mean $O(p \log(p))$?** Yes, this is correct. In Algorithm 1, the most expensive step is the sorting step (line 3), which has computational cost $O(p \log(p))$. All other steps have computational costs $O(p)$. Thus, the overall computational cost is $O(p \log(p))$. 3. **Is our method (BnB with lower bounds computed by our Algorithm 3) better than all other methods in solving the overall Problem 1? 
Under what conditions?** To properly address this question, let us first provide some context about BnB algorithms. Problem (1) is both nonconvex and NP-hard. Thus, any method capable of certifying optimality, including ours, must ultimately rely on BnB. This BnB framework consists of several key components, including heuristics, branching, bounding (calculating lower bounds), and presolving, among many others. Our paper focuses on rapidly solving relaxation problems at both the root and node levels. This, in turn, provides fast lower bound certificates for the BnB algorithm. With this background information, we can now more directly address your first question. The answer is yes, in the sense that it is currently the method that achieves the best results and scalability, thanks to efficient lower bound computation. The key advantage lies in our formulation's compatibility with first-order optimization methods, which enables warm-starting and maintains computational tractability even for large-scale problems. While there exist theoretically tighter relaxations in the literature (particularly rank-one SOCP and SDP formulations), these formulations face fundamental computational limitations. They require interior-point methods, which cannot be warm-started effectively and also scale poorly with problem size. Our perspective relaxation provides the optimal practical balance: it delivers sufficiently tight bounds while remaining computationally efficient through first-order methods. For your second question, our method performs the best when applied to **large-scale** problems, on which existing lower bound computation methods scale poorly. In such cases, our approach provides substantial improvements in both speed and scalability. For small-size problems, we observe that computing lower bounds is generally computationally cheap.
In these scenarios, both our branch-and-bound implementation and commercial mixed-integer programming solvers can typically certify optimality within seconds. While these smaller cases serve as useful validation of our method's correctness, they are not the primary focus of our work, as they do not present the same computational challenges that motivate our technical contributions. We sincerely appreciate your thoughtful questions and engagement with our work. We hope to have answered your concerns and questions, and we would be happy to provide further explanations.
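To make the role of bounds in the BnB discussion above concrete, here is a generic best-first branch-and-bound sketch on a toy 0/1 knapsack problem (our own illustration, unrelated to the paper's solver): the fractional LP relaxation supplies a valid bound at each node, playing the role the perspective relaxation plays for k-sparse GLMs, and tighter or faster bounds translate directly into more pruning.

```python
import heapq

def knapsack_bnb(values, weights, capacity):
    # Best-first branch-and-bound for 0/1 knapsack (maximization).
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(idx, value, room):
        # greedy fractional completion over remaining items: an upper bound
        for i in order[idx:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    best = 0
    # max-heap on the node bound (negated for heapq's min-heap)
    heap = [(-bound(0, 0, capacity), 0, 0, capacity)]
    while heap:
        neg_ub, idx, value, room = heapq.heappop(heap)
        if -neg_ub <= best:      # prune: bound cannot beat the incumbent
            continue
        if idx == n:
            best = max(best, value)
            continue
        i = order[idx]
        if weights[i] <= room:   # branch: take item i
            take = value + values[i]
            best = max(best, take)
            heapq.heappush(heap, (-bound(idx + 1, take, room - weights[i]),
                                  idx + 1, take, room - weights[i]))
        # branch: skip item i
        heapq.heappush(heap, (-bound(idx + 1, value, room), idx + 1, value, room))
    return best
```

Because pruning only discards nodes whose bound cannot exceed the incumbent, the search remains exact; the same logic underlies certifying optimality for the k-sparse problem.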
Summary: This paper considers the problem of sparse generalized linear model (GLM) optimization. Specifically, the goal is to fit a generalized linear model under the constraint that at most $k$ features are used. Typical approaches for such problem include LASSO, greedy/local search-based techniques, as well as branch-and-bound approaches. The authors focus on developing a fast algorithm for solving the convex relaxation that appears in each step of the branch-and-bound process. While the usual relaxation employed (e.g. in LASSO) replaces the $\ell_0$ sparsity constraint by an $\ell_1$ constraint, this relaxation is not necessarily tight. Instead, the authors employ convex duality to study the convex hull of the $\ell_0$-based discrete optimization set, and end up with a tighter but non-closed form constraint. This constraint essentially regularizes the top (largest-magnitude) features separately from the bottom ones, using an $\ell_2$ norm for the former but an $\ell_1$ norm for the latter. The authors show how to evaluate as well as optimize over this regularizer efficiently, leading to an overall efficient algorithm for solving the relaxation. Experiments show that this specialized algorithm outperforms off-the-shelf second-order cone programming solvers. Overall, I believe this is a good paper that provides a thorough and valid analysis of the sparse GLM problem. I believe it could be used to further move the state of the art by making branch-and-bound-based sparsity solvers more efficient. This would be significant for quality-sensitive applications. Claims And Evidence: I found all the claims presented to be valid. Methods And Evaluation Criteria: The method is evaluated on synthetic tasks, using both CPU and GPU, as well as two real linear/logistic regression datasets, which are highly over-parametrized. The experiments are performed multiple times to reduce variance. 
The results show a significant benefit of using the proposed method (the version with random restarts) compared to off-the-shelf optimizers like Gurobi. Theoretical Claims: All the theoretical claims look correct to me. I checked the proof of Theorem 3.6 and all the claims in the main body. Experimental Designs Or Analyses: N/A Supplementary Material: Proof of Theorem 3.6 Relation To Broader Scientific Literature: The idea of solving sparse optimization with branch-and-bound has been studied before, e.g. "Sparse Branch and Bound for Exact Optimization of L0-Norm Penalized Least Squares" by Mhenni et al. However, that work uses an $\ell_1$ relaxation. I am not aware of any work using a tighter relaxation. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is well-written. The box constraint parameter $M$ is introduced but then ignored in Algorithm 2 (I guess it assumes $M=\infty$). It might also be implicitly assumed in other algorithms / theorems. It would be good for the authors to fix this. Other Comments Or Suggestions: - Add some comparison with ISTA. - An optional suggestion is to take a look at papers in the optimal transport / deep learning literature for similar approaches regarding a soft-top-k operator. https://arxiv.org/pdf/2002.06504 https://arxiv.org/pdf/2304.04947 At a high level they have the same goal of inducing tighter relaxations to the $\ell_0$ operator. Questions For Authors: - How do the results compare to ISTA (i.e. no acceleration)? - Is there any experimental evidence that the relaxation used here is superior to the $\ell_1$-based relaxation? It would be great to see at least one experiment or some pointers to other works that do that. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We address their concerns below: 1. **Connection to Mhenni et al. and experimental comparison** Thank you; we will cite this paper. However, we note that our perspective relaxation is tighter than the $\ell_1$ relaxation of Mhenni et al. Specifically, Lemma 2.1 establishes that the closed convex hull of \begin{align} \\{ ( \beta, z, t): ||\beta||_2^2 \leq t, z \in \\{0, 1\\}^p, 1^\top z \leq k, |\beta_j| \leq M z_j ~~ \forall j \in [p] \\} \end{align} is \begin{align} \\{ ( \beta, z, t) : \sum _{j=1}^p \beta_j^2/z_j \leq t, z \in [0,1]^p, 1^\top z \leq k, -M z_j \leq \beta_j \leq M z_j ~~ \forall j \in [p] \\}. \end{align} This is the tightest possible relaxation for the $\ell_2$ term. The perspective relaxation we want to solve within the BnB algorithm is: \begin{align} \min_{\beta, z} || y - X \beta || _2^2 + \lambda_2 \sum _{j=1}^p \beta_j^2/z_j \quad \text{s.t.} \quad z \in [0, 1]^p, 1^\top z \leq k, -M z_j \leq \beta_j \leq M z_j \end{align} In contrast, the $\ell_1$-based relaxation proposed by Mhenni et al. (Equation 2 in their paper and subsequent discussion) solves: \begin{align} \min_{\beta} || y - X \beta ||_2^2 + \lambda_2 ||\beta||_2^2 \quad \text{s.t.} \quad ||\beta||_1 \leq k M, ||\beta|| _{\infty} \leq M \end{align} As Lemma 2.1 indicates, our perspective relaxation provides tighter lower bounds, which is crucial for pruning nodes and certifying optimality.
Below we provide a numerical comparison (same setup as Figure 2 in the paper):

Linear regression (the higher the better):

| method | p=1000 | p=2000 | p=4000 | p=8000 | p=16000 |
| :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |
| l1 | 1088.78 | 2418.70 | 5518.70 | 11877.03 | 25710.79 |
| perspective | 1119.53 | 2449.34 | 5549.40 | 11907.39 | 25741.67 |

Logistic regression (the higher the better):

| method | p=1000 | p=2000 | p=4000 | p=8000 | p=16000 |
| :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |
| l1 | 202.12 | 485.82 | 1000.84 | 2216.15 | 4667.79 |
| perspective | 234.34 | 518.99 | 1032.95 | 2248.64 | 4699.78 |

2. **Does Algorithm 2 only work when $M=\infty$?** Algorithm 2 is valid for any $M > 0$. In Algorithm 3, we first compute the proximal update (line 8), before evaluating $g(\beta ^{t+1})$ in Algorithm 2. Since $\beta ^{t+1}$ is the output of the proximal operator, it inherently satisfies all constraints in Equation (7). Namely, there exists some $z \in [0, 1]^p$ such that $1^\top z \leq k$ and $-M z_j \leq \beta_j \leq M z_j \; \forall j \in [p]$. This property is critical to the proof of Theorem 3.6 (Lines 896-897 and 902-903), where we show that Algorithm 2 correctly computes $g(\beta ^{t+1})$. We will clarify and emphasize this point in the revision. 3. **Comparison with ISTA** If by ISTA the reviewer refers to the non-accelerated version of FISTA, the comparison is already included in Figure 3, as PGD (Proximal Gradient Descent) and ISTA are essentially the same algorithm. We used the more general term PGD since ISTA typically implies an $\ell_1$ regularizer. We would be happy to rename the legend to ''PGD/ISTA'' in the revision. Please let us know if we have misinterpreted your meaning regarding ISTA. 4. **Connection to optimal transport/deep learning literature for soft-top-k operator** Thank you; we will cite these two papers. However, the smooth approximation of the top-k operator is not suitable here.
The effectiveness of the proximal algorithm relies on evaluating the proximal operator exactly. Replacing the top-k operator with a smooth surrogate would amount to solving a different problem rather than evaluating the proximal operator, so it would not allow us to use FISTA to solve the perspective relaxation. As a result, this approach would not guarantee the valid lower bounds necessary for optimality certification. We hope this answers your concerns and questions, and we would be happy to provide further explanations.
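For reference on the PGD/ISTA point in this rebuttal: a bare-bones ISTA loop for the standard $\ell_1$-regularized least-squares problem is sketched below. The paper's method keeps the same gradient step but swaps the soft-thresholding prox for the prox of the tighter perspective regularizer; this sketch is illustrative, not the authors' implementation.

```python
import numpy as np

def ista(X, y, lam, steps=200):
    """Plain proximal gradient (ISTA) for min_b 0.5*||y - X b||^2 + lam*||b||_1.
    Illustrative only: the paper's algorithm replaces the l1 soft-threshold
    prox below with the prox of its perspective-relaxation regularizer."""
    L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the smooth part
    b = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ b - y)             # gradient of 0.5*||y - X b||^2
        z = b - grad / L                     # gradient step
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return b
```

With `X` the identity, the loop reduces to soft-thresholding `y` at level `lam`, which gives a quick sanity check of the prox step.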
Deep Neural Cellular Potts Models
Accept (poster)
Summary: The paper introduces NeuralCPM, a novel cellular Potts model (CPM) that employs a neural network to parameterize the Hamiltonian, diverging from traditional CPMs that rely on manually-defined, physics-inspired analytical Hamiltonians. This Neural Hamiltonian is designed to respect symmetries inherent in cellular dynamics, such as permutation and translation invariance, and can be trained directly on observational data. A key feature is its ability to integrate domain knowledge through a hybrid model, combining known biological mechanisms with the neural network to enhance biological realism and expressiveness. The authors evaluate NeuralCPM across three scenarios: synthetic data for parameter fitting of known Hamiltonians, the Cellular MNIST dataset to demonstrate increased expressiveness in modeling complex structures, and real-world bi-polar axial organization data from Toda et al. (2018) to showcase practical applicability. Main findings include NeuralCPM's superior performance over traditional CPMs in capturing complex cellular dynamics, achieving higher biological consistency and pattern fidelity, as evidenced by metrics like the Classifier Score (CS) and axial alignment RMSE. ## update after rebuttal The authors have clarified my concerns. I think this is a strong paper; as a result, I am increasing my score. Claims And Evidence: The primary claim is that NeuralCPM is more expressive and better at modeling complex cellular dynamics than traditional CPMs. This is supported by: - Synthetic Data Experiments: NeuralCPM accurately fits parameters of known analytical Hamiltonians for cell sorting, validated by convergence plots (e.g., Figures 4 and 8), showing rapid alignment with ground-truth values. - Cellular MNIST Experiment: The model generates digit-like cellular structures, with a higher CS (Table 1) than analytical baselines, indicating improved expressiveness. Qualitative results (Figure 5) further support this.
- Real-World Data Experiment: NeuralCPM predicts bi-polar axial organization, aligning with laboratory observations (Figure 7), with a significantly lower axial alignment RMSE (37.3) compared to baselines (Table 2). The evidence is generally clear and convincing. Methods And Evaluation Criteria: The overall approach of learning a Hamiltonian that previously had to be defined manually in analytical form makes sense. The Neural Hamiltonian architecture is thoughtfully designed, using convolutional neural networks (CNNs) and permutation-invariant aggregation to respect the symmetries of cellular dynamics, which is apt for the problem. The hybrid model integrating domain knowledge is a sensible approach, enhancing interpretability and leveraging biological insights. The evaluation seems appropriate. However, for real-world data, direct comparison metrics between simulated and observed dynamics could further validate the approach. Theoretical Claims: This paper does not introduce new theoretical results. Experimental Designs Or Analyses: The authors use three different experiments. Overall, the designs are solid, with minor concerns addressed via questions below. Supplementary Material: The code is available in the supplementary material. Relation To Broader Scientific Literature: NeuralCPM advances CPM research by integrating neural networks. Essential References Not Discussed: I don't think there is related work that is essential but not discussed in this paper. Other Strengths And Weaknesses: ### Strengths - Originality: Combining neural networks with CPMs is innovative and useful, removing the need for manual Hamiltonian design. - Significance: Improved modeling of complex biological systems could impact fields like cancer research. - Domain Knowledge Integration: The hybrid model enhances trust and interpretability for biologists. - Clarity: The paper is very well written and very clear.
### Weaknesses - Computational Complexity: MCMC sampling limits scalability, though future work is proposed. - Training Stability: Challenges with deep EBM training are noted but not fully resolved, affecting reproducibility. Other Comments Or Suggestions: - Typos: "classs" (Page 2, line 109) should be "class"; "desings" (Page 14) should be "designs". - Formatting: Ensure consistent figure captions (e.g., Figure 7a vs. 7(a)). Questions For Authors: Synthetic Data for Bi-Polar Experiment: Could you elaborate on how the synthetic training data for the bi-polar axial organization experiment was generated to reflect real-world dynamics? Specifically, how were cell motion preferences assigned, and what ensures their biological plausibility? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for the rigorous review. We are happy to learn that you appreciated the originality, significance and clarity of our work. We address your specific comments in our response below; any new tables and figures (referred to as e.g. Figure R1) can be found via https://anonymous.4open.science/r/neuralcpm-rebuttal-results-761F. ### General comments 1. **Direct comparison between simulated and observed dynamics:** In Figures 6 and 7b of the paper, we show a qualitative respectively quantitative comparison of the simulated dynamics and the observed real-world biological cell dynamics of Toda et al. Both figures show that the simulations of NeuralCPM align with the observations. To further emphasize this, we now include a new Figure R4, following the same procedure as Figure 7b, but now using simulations based on the Cellsort Hamiltonian. Comparing Figure 7b and R4, we see that NeuralCPM’s dynamics align substantially better with the observations of Toda et al. ### Weaknesses 2. **Computational complexity:** Indeed, MCMC sampling limits scalability to some extent, but we would like to stress that our current applications did not suffer from prohibitive computation times. To illustrate, the amortized time for a single simulation with the Neural Hamiltonian model was 44 seconds for cellular MNIST (Figure 5) and 62 seconds for bi-polar axial organization (Figure 6), and all models were trained for at most 24 hours. Additionally, as we briefly mentioned in the conclusion of our paper, we expect that exploiting a first order Taylor expansion to approximate the energy difference [1,2] can accelerate the computation substantially, since this can enable approximating the energy difference for many parallel spin flips at the cost of a single Hamiltonian evaluation. Moreover, recent advances in Deep EBM training (e.g. [3]) can also be applied to NeuralCPM in the future. 3. 
**Training stability:** NeuralCPM relies on EBM training techniques, which can be sensitive to hyperparameters. However, for our experimental scenarios, we found that adding the biological term to the Hamiltonian greatly stabilized training. This is because the biological term effectively acts as a ‘guardrail’ that constrains the MCMC simulation to a biologically reasonable manifold (e.g. cells have reasonable volumes and are contiguous objects). We further expect that ongoing research on training (discrete) EBMs (e.g. [1,2,3,4] as well as future works) can carry over to NeuralCPM training. ### Questions 4. **Clarification on synthetic data for bi-polar experiment:** To generate synthetic training data corresponding to Toda et al., we used cell motion along imposed trajectories such that cells of one type (‘red’) move from their random initial position into a central band and cells of the other type leave the central band towards opposing cluster borders. So by construction, we generate training data with a bi-polar axis. Additionally, to make sure the training data are isotropic, we randomly rotate the final configuration. In contrast, biological cells (and NeuralCPM) do not rely on any such prescribed trajectories, but update cell positions as a result of cell-cell interactions in the current configuration. However, the patterns that result from this procedure are comparable to those observed in Toda et al. (2018) in terms of bi-polar axial organization (see Fig. 7). Finally, our model needs only these final patterns for training, and a comparison of the model with the real observations of Toda et al. shows that the spontaneous symmetry-breaking dynamics of NeuralCPM align with the dynamics observed in Toda et al. (2018). ### Other comments or suggestions 5. **Typos and formatting:** We thank the reviewer for their detailed examination of the manuscript, and we will make sure to remove these errors in the updated version of the paper. 
### References [1] Grathwohl et al. (2021). Oops I Took A Gradient: Scalable Sampling for Discrete Distributions. [2] Zhang et al. (2022). A Langevin-like sampler for discrete distributions. [3] Schröder et al. (2023). Energy discrepancies: a score-independent loss for energy-based models. [4] Sun et al. (2023). Discrete Langevin samplers via Wasserstein gradient flow.
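The first-order Taylor speedup mentioned in point 2 of this rebuttal can be sketched as follows, assuming a one-hot encoding of the lattice and a differentiable Hamiltonian whose gradient has already been evaluated once. This mirrors the construction of Grathwohl et al. [1], not NeuralCPM's current implementation.

```python
import numpy as np

def approx_energy_diffs(grad_H, sigma_onehot):
    """First-order Taylor estimate of the energy change for relabeling each
    lattice site i to each spin value s:
        dH(i, s) ~= grad_H[i, s] - grad_H[i, current_spin(i)].
    A single gradient evaluation prices all candidate flips at once, which is
    the source of the claimed speedup over one Hamiltonian call per flip."""
    current = (grad_H * sigma_onehot).sum(axis=1, keepdims=True)
    return grad_H - current
```

Here `grad_H` is the `(sites, spins)` gradient of the Hamiltonian with respect to the one-hot state, and `sigma_onehot` encodes the current configuration; both names are placeholders for illustration.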
Summary: In this paper, the authors propose NeuralCPM, a neural network-based cellular Potts model (CPM) that aims to learn the dynamics of multicellular systems through a neural Hamiltonian. The core idea is to train the Hamiltonian as a deep energy-based model (EBM), overcoming the reliance of traditional CPMs on hand-designed energy functions. The paper claims that NeuralCPM is able to fit more complex cell behaviors and demonstrates its superiority by comparing it with experimental data. The paper provides several experiments to support these claims, including: 1. a synthetic data experiment to verify that NeuralCPM can learn a parameterized Hamiltonian and accurately recover its parameters; 2. Cellular MNIST dataset experiments demonstrating that NeuralCPM can generate morphology-specific cellular arrangements, such as digit shapes; 3. real biological experiments (Toda et al., 2018) validating the effectiveness of NeuralCPM on the bi-polar axial organization task. Claims And Evidence: 1. The core methodology proposed in the paper, NeuralCPM and its Neural Hamiltonian, is conceptually clear and reasonable, especially the emphasis on the symmetries of the cellular system, the integration of domain knowledge, and the sensible use of the deep energy-based model (EBM) framework. The argumentation is logical, rigorous, and clear, and the proposed theory and methods are credible and innovative. 2. In terms of experiments, the validity and advantages of the method are verified through three different scenarios (parameter fitting, complex structure simulation, and real biological data application). The experiments are well designed, and the quantitative metrics (e.g., RMSE, Classifier Score) clearly reflect the performance of the model.
The experimental results clearly show that the Neural Hamiltonian model outperforms the traditional analytical Hamiltonian model and the basic neural network architectures in simulating complex cellular behaviors. However, the article notes that the CNN-based Hamiltonian produces unrealistic dynamics because it lacks permutation symmetry, which suggests that the chosen baseline has obvious limitations and may affect the comprehensiveness of the experimental comparisons. 3. Despite the overall strong evidence, some aspects of the paper need further refinement. For example, the authors rely on synthetic data as the training set in the real biological data validation experiment, and the sample size of the real experiments is on the low side (only six repetitions), so whether the method generalizes to other real systems still needs further verification. In addition, the stability of the training process still needs to be verified and illustrated in different scenarios. Methods And Evaluation Criteria: 1. The authors have selected reasonable evaluation metrics in their experiments, including biological consistency metrics for cell volume and fragmentation, and the Classifier Score for quantifying the structural expressiveness of cell populations; this evaluation clearly reflects the performance of the model. In addition, the authors use several experimental scenarios of varying difficulty, including parameter fitting of theoretical Hamiltonian functions, complex image structures (Cellular MNIST), and real biological experiments (e.g., the bi-polar cellular organization experiments of Toda et al.), to demonstrate the generality and validity of the model; in particular, the comparative validation against real experimental data enhances the credibility of the results. 2.
However, some of the experiments have limitations: the constructed simulation data simplify the complexity of real biological systems to some extent (e.g., there are discrepancies between the synthetic and real data in the bi-polar axial organization experiment) and may not fully reflect the nonlinear and stochastic characteristics of real cellular dynamics. In addition, the selection of the CNN baseline model is questionable, so some of the comparisons may underestimate the capabilities of other potential architectures. Theoretical Claims: This paper does not contain formal theoretical proofs, so there are no proofs to check. The main contribution of the authors is to propose a deep learning method called NeuralCPM and to establish the soundness of the theoretical framework by means of experiments and an exposition of the symmetry constraints. Experimental Designs Or Analyses: I thoroughly checked the soundness and validity of the experimental designs and analyses presented in the manuscript. The paper primarily conducts three sets of experiments: fitting known analytical Hamiltonians, simulating complex multicellular structures using Cellular MNIST data, and applying the model to a real-world biological dataset (bi-polar axial organization). The experimental design for parameter fitting using analytical Hamiltonians is sound and well-structured. The authors employed Root Mean Squared Error (RMSE) to quantitatively assess the accuracy of parameter estimation, clearly demonstrating the model’s capability in learning physically meaningful parameters from data. However, given the central role of the temperature parameter in CPM dynamics, the experimental setup could benefit from further clarification on temperature sensitivity and robustness.
In the Cellular MNIST and bi-polar axial organization experiments, the authors selected appropriate metrics, such as cell volume, fragmentation, and Classifier Score, to capture both biological realism and structural complexity effectively. These metrics convincingly evaluate both cellular-level and collective-level model performance. However, there are some concerns: firstly, the choice of baseline models is limited—particularly the CNN baseline lacks permutation symmetry, leading to unrealistic dynamics. Including more suitable baselines or justifying these choices explicitly would strengthen the comparative validity of results. Secondly, in the bi-polar axial organization scenario, synthetic training data may oversimplify biological complexity, and the potential gap between synthetic and real-world data could be more explicitly discussed. Addressing these concerns would enhance the robustness and generalizability of the experimental validation. Supplementary Material: I reviewed the supplementary material provided in the manuscript, focusing on the sections related to data generation, implementation details, and training procedures (Appendices A and B). The supplementary materials effectively support reproducibility and clarify methodological decisions. However, the simplifications in synthetic data generation and the assumptions inherent in sampling strategies could benefit from additional explicit analysis and discussion of implications on generalizability and biological realism. Relation To Broader Scientific Literature: None Essential References Not Discussed: None Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: Q1. In the bi-polar axial organization experiment, synthetic data generated by Morpheus were used for training. Could you elaborate on how well these synthetic data represent real-world complexity? Specifically, how sensitive is your model to the simplifications made during data generation? 
Clarifying this would strengthen confidence in your method’s applicability to real biological scenarios. Q2. You introduced an approximate sampler to accelerate the MCMC dynamics. Could you explain more explicitly how deviations from detailed balance affect your results? Have you empirically or theoretically evaluated how these deviations impact the final distributions obtained by your trained models? Addressing this point would help assess potential biases introduced by the approximation. Q3. Baselines such as the CNN-based Hamiltonian lacked permutation symmetry, potentially limiting their comparative value. Could you justify the selection of these baseline models? Alternatively, did you consider other relevant baselines that respect the symmetries in cellular systems? This clarification is important to fully understand the relative advantages of your proposed architecture. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thorough review! We appreciate that you found our methodology clear, innovative, and rigorous. We address your specific comments in our response below; any new tables and figures (referenced as e.g. Figure R1) can be found via https://anonymous.4open.science/r/neuralcpm-rebuttal-results-761F. ### General comments 1. **Temperature sensitivity:** Since we can always rescale the Hamiltonian in the CPM, we can assume without loss of generality that the temperature equals 1, and train the Hamiltonian for that temperature. The typical effect of $T$ roughly corresponds to the strength of the fluctuations in the system over time [1]; however, it does not substantially affect the behavior of the system at equilibrium, and a particular Hamiltonian will result in qualitatively similar self-organized patterns for varying temperatures. ### Questions 2. **Use of synthetic data for bi-polar axial organization experiment:** To generate synthetic training data corresponding to Toda et al., we used cell motion along imposed trajectories such that cells of one type (‘red’) move from their random initial position into a central band and cells of the other type leave the central band towards opposing cluster borders. So by construction, we generate training data with a bi-polar axis. Additionally, to make sure the training data are isotropic, we randomly rotate the final configuration. Biological cells (and NeuralCPM) do not rely on any such prescribed trajectories, but update cell positions as a result of cell-cell interactions in the current configuration. Still, we argue that the synthetic final self-organized states are statistically comparable to those observed in Toda et al. (2018) in terms of bi-polar organization, shown in Figure 7a. As the training process only relies on these final states, the simplifications we made in the data generation do not necessarily have adverse effects on the model quality.
We test this in Figure 7, where NeuralCPM’s simulations are shown to align very well with the real data observations of Toda et al. Of course, in general one should take such a synthetic approach with care, and in this light this experiment should be seen as a proof-of-principle towards more complex biological applications. We will update our paper to include an explicit discussion about this point. 3. **Approximate parallel CPM sampler and deviations from detailed balance:** Notably, the simulation dynamics from the standard CPM already lack detailed balance [2]. Though the approximate parallel sampler can theoretically suffer from synchronization issues when parallel spin-flips affect each other (see e.g. [3]), we stress that it should be regarded as a surrogate sampler to accelerate training; other established training techniques for EBMs, like Persistent Contrastive Divergence [4], also successfully sacrifice unbiased sampling for accelerated training, as unbiased maximum likelihood estimation is generally intractable. 4. **Lack of permutation symmetry in CNN baseline:** Based on your suggestion, we now added an additional GNN baseline which enjoys permutation symmetry to further strengthen our experimental evaluation, in addition to other ablations (see point 1 in our response to reviewer KQ4z). The quantitative and qualitative results are shown in Table R2 and Figures R2+R3 respectively. The results indicate that the Neural Hamiltonian outperforms the GNN baseline in terms of self-organization (CS and alignment RMSE metrics), emphasizing the importance of inductive biases for lattice data, e.g. translation symmetry and locality, in addition to permutation symmetry. ### References [1] Glazier and Graner (1993). Simulation of the differential adhesion driven rearrangement of biological cells. [2] Durand and Guesnet (2016). An efficient Cellular Potts Model algorithm that forbids cell fragmentation. [3] Sultan et al. (2023). 
A parallelized cellular Potts model that enables simulations at tissue scale. [4] Tieleman (2008). Training restricted Boltzmann machines using approximations to the likelihood gradient.
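A minimal sketch of the standard CPM Metropolis acceptance rule underlying point 1 of this rebuttal: since a spin flip is accepted with probability $\min(1, e^{-\Delta H / T})$, dividing the Hamiltonian by $T$ yields identical dynamics with $T = 1$, which is why training can fix unit temperature. This is an illustrative pseudo-implementation, not the authors' code.

```python
import math
import random

def metropolis_accept(delta_H, T=1.0):
    """Accept a proposed spin flip with probability min(1, exp(-delta_H / T)).
    Flips that lower the energy (delta_H <= 0) are always accepted; rescaling
    the Hamiltonian by 1/T and setting T = 1 leaves this rule unchanged."""
    return random.random() < math.exp(-max(delta_H, 0.0) / T)
```

Energy-lowering moves are accepted deterministically, while energy-raising moves survive with a probability that decays exponentially in the energy increase.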
Summary: This paper introduces Neural Hamiltonians, a novel approach for parameterizing the Hamiltonian function in cellular Potts models. The authors propose using neural networks to learn the Hamiltonian function directly from data, while preserving important physical and biological constraints. The key innovation is a formulation that respects critical symmetries inherent to cell populations - specifically, invariance to cell permutations and lattice translations. This ensures the learned models maintain biologically plausible behavior regardless of how cells are indexed or positioned. Additionally, the authors present a hybrid framework that combines traditional symbolic Hamiltonians (which incorporate known biological principles) with neural network-based Hamiltonians (which can capture complex patterns from data). The approach includes learnable weights that balance the contribution of these two components, allowing the model to leverage both domain knowledge and data-driven insights. ## update after rebuttal After examining the authors' rebuttal, I can confirm that they have adequately addressed my concerns. Therefore, I have upgraded my recommendation to Accept. Claims And Evidence: See the strengths and weaknesses below. Methods And Evaluation Criteria: See the strengths and weaknesses below. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental methodology and analytical approach used in this work are fundamentally sound and well implemented. The authors have come up with a reasonable framework for evaluating their Neural Hamiltonian method. While the overall experimental design is sound, there are several specific observations and suggestions regarding these experiments that I have detailed in the sections below. Supplementary Material: I have not examined the supplementary materials in detail. Relation To Broader Scientific Literature: See the summary. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths** - The paper is well-written, with clear explanations of complex concepts that make the work accessible despite the interdisciplinary nature of the research. - The authors' approach of using neural networks to parameterize Hamiltonian functions in cellular Potts models represents an interesting innovation. This technique appears to be both effective and mathematically sound. - The Neural Hamiltonian framework successfully captures complex non-local cell interactions that traditional analytical methods struggle to model, at least as could be observed in the cellular MNIST experiments. This capability can be a substantial advancement in the field of computational biology and multicellular modeling. - The combination of symbolic (biologically-informed) and neural network-based Hamiltonians is particularly valuable. This can be an elegant balance between incorporating established biological knowledge and leveraging data-driven insights. **Weaknesses** - The baseline models included in the experimental comparisons are somewhat simplistic. For instance, the CNN baseline is predictably ill-suited for the task, making its poor performance unsurprising. - The connection between the synthetic datasets (particularly cellular MNIST) and real cellular dynamics remains vague. The paper does not sufficiently justify how success on these artificial benchmarks translates to biological validity or relevance. - The experiments with real biological data are notably limited in scope and depth. These experiments fail to fully demonstrate the potential advantages of Neural Hamiltonians in addressing meaningful biological questions. More extensive validation with diverse real-world cellular data would strengthen the paper's claims and better highlight the practical utility of the method. 
- The paper would benefit from more detailed ablation studies to isolate and quantify the contribution of each component of the proposed framework, particularly regarding the hybrid approach and symmetry constraints. Other Comments Or Suggestions: The abbreviation CNN is used in the text before the full term is introduced. Questions For Authors: - How well does the cellular MNIST dataset reflect the complexity and characteristics of real cell population dynamics? Could you elaborate on the specific biological phenomena that this synthetic benchmark captures, and what limitations it might have in representing actual cellular behaviors? - Could you provide the visualization referenced in Figure 7b for the cellsort Hamiltonian? This would help me better understand the comparative performance of your method in this specific scenario. - For the first experiment, which components of the Neural Hamiltonian design are necessary for accurate parameter estimation? Have you conducted ablation studies to determine how removing or modifying certain elements affects the overall performance? - Given the novelty of your architecture, it would be valuable to see a systematic ablation study on the architectural components and their impact on experimental outcomes. Specifically: How crucial is the cell interaction CNN to the overall performance? What is the specific contribution of the pooling layers in capturing cellular dynamics? How does removing symmetry constraints affect model performance? What is the optimal balance between the symbolic and neural components in the hybrid approach? - How does the computational complexity of training and inference with Neural Hamiltonians compare to traditional approaches? Is there a significant trade-off between model expressivity and computational requirements? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your insightful review. We are glad that you found our work well-written and sound, and that you appreciated the innovations of our approach. Our response can be found below; new results (referenced as e.g. Figure R1) can be found via https://anonymous.4open.science/r/neuralcpm-rebuttal-results-761F. ### Weaknesses 1. **Baselines and ablations:** We now added three additional baselines and ablations: 1) a GNN Hamiltonian, which enjoys permutation symmetry but lacks translation symmetry and inductive biases for lattice data; 2) an ablation of the Neural Hamiltonian where cells cannot interact by removing the cell-interaction CNN in each layer, masking out any information from neighboring cells; and 3) an ablation of the Neural Hamiltonian where we remove the local pooling step in all NH layers. We show the results for experiments 2 and 3 in Table R2 and Figures R2 and R3. For the bi-polar axial organization experiment, the ablation of removing the pooling in the NH layers diverged during training (remaining hyperparameters identical to NH+closure), and hence we do not report these data. The overall results show that the paper's Neural Hamiltonian + closure design consistently outperforms all baselines in terms of the converged multicellular patterns. 2. **Biological relevance of synthetic data:** We agree that the cellular MNIST data is not directly biologically relevant. Instead, for this experiment we aimed to design a scenario with cellular structures that are complex enough to be poorly approximated by a symbolic model, yet simple enough to facilitate straightforward (visual) validation of model quality. In a broader biological context, the relevance of this experiment is to test NeuralCPM’s ability to model structures beyond the capabilities of traditional CPMs, which indicates our method's potential to bring real biological use cases such as wound healing, cell migration, and embryo formation within reach of CPM simulations. 
3. **Scope of biological experiments:** We only evaluated one real biological use-case in this paper, as our goal was to introduce and validate the core concepts of NeuralCPM before moving to more extensive and complex biological case studies. Still, we argue that the results on this scenario sufficiently demonstrate NeuralCPM’s potential for future applications, as the CPM formalism is very versatile (evidenced by hundreds of CPM models of biological processes in public repositories: https://artistoo.net/examples.html, https://morpheus.gitlab.io/#examples, https://compucell3d.org/Simulation%20Movies), and can now be derived from data with NeuralCPM. 4. **Ablation study:** Please see point 1 above. ### Questions 5. **MNIST dataset:** Please see point 2 above. 6. **Visualization for Cellsort Hamiltonian:** Please see Figure R4. For type-2 cells, there is reasonable overlap, implying that the degree of collinearity along the principal axis of type-2 cells is similar to the NeuralCPM case. However, the type-1 cells do not align, meaning that the Cellsort model fails to ‘squeeze’ them along the orthogonal axis, which NeuralCPM successfully learned. 7. **Hamiltonian components:** The goal of the experiment is to investigate if the NeuralCPM training framework can recover unknown parameters in symbolic Hamiltonians. To this end, the Hamiltonian consisted only of symbolic terms and randomly initialized parameters. We now also added additional results to investigate the robustness of this approach; please see point 1 of our response to reviewer FiGo for a discussion. 8. **Ablation study:** Please see point 1 above. 9. **Computational complexity:** Due to the Neural Hamiltonian, NeuralCPM requires more computational effort than traditional CPMs with symbolic Hamiltonians. 
The amortized time for a simulation with the Neural Hamiltonian model was 44 seconds (Figure 5) and 62 seconds (Figure 6), while the Cellsort Hamiltonian + External potential, which exploits locality assumptions for an efficient calculation, required 0.3 and 0.5 seconds respectively. So, although the Neural Hamiltonian is more expensive, the simulation times remain manageable. Further, approximating the energy difference with a first order Taylor expansion [1,2] can further accelerate the computation since this enables approximating the energy difference for multiple spin flips at the cost of a single Hamiltonian evaluation. So, using such a technique to evaluate 10+ spin flips in parallel could give an order of magnitude speedup, offering a promising path for scaling NeuralCPM to tissue-scale simulations. Finally, all training runs took at most 24 hours, and the tradeoff in terms of relative time per training step is comparable to inference. ### References [1] Grathwohl et al. (2021). Oops I Took A Gradient: Scalable Sampling for Discrete Distributions. [2] Zhang et al. (2022). A Langevin-like sampler for discrete distributions.
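The first-order Taylor trick mentioned in point 9 could be sketched as follows. This is a minimal illustration in which a toy quadratic energy stands in for the Neural Hamiltonian (all names and values here are hypothetical); it shows how a single gradient computation yields approximate energy differences for every candidate spin flip at once, versus one exact Hamiltonian evaluation per flip.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))  # toy coupling matrix, not the paper's model

def hamiltonian(x):
    # Toy quadratic energy over a one-hot lattice state x (sites x spin values);
    # the paper's Neural Hamiltonian would take this role.
    return float(np.sum((x @ W) * x))

def grad_hamiltonian(x, eps=1e-5):
    # Central finite differences for illustration only; an autodiff framework
    # would give this gradient at roughly the cost of one energy evaluation.
    g = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        xp, xm = x.copy(), x.copy()
        xp[idx] += eps
        xm[idx] -= eps
        g[idx] = (hamiltonian(xp) - hamiltonian(xm)) / (2 * eps)
    return g

sites, values = 6, 4
x = np.eye(values)[rng.integers(0, values, size=sites)]  # one-hot lattice state
cur = x.argmax(axis=1)

# Rough first-order estimates of dH for flipping site i to spin value v,
# for all sites*values candidate flips, from ONE gradient pass:
g = grad_hamiltonian(x)
taylor_dH = g - g[np.arange(sites), cur][:, None]

# Exact differences require one Hamiltonian evaluation per candidate flip:
exact_dH = np.zeros_like(taylor_dH)
for i in range(sites):
    for v in range(values):
        xf = x.copy()
        xf[i] = np.eye(values)[v]
        exact_dH[i, v] = hamiltonian(xf) - hamiltonian(x)
```

Flipping a site to its current value is a no-op, so both estimates are exactly zero there; elsewhere the Taylor values are only first-order approximations of the exact differences.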
Summary: This paper introduces NeuralCPM, a neural network-based approach to cellular Potts modeling (CPM) for simulating collective cell dynamics. Traditional CPMs rely on physics-inspired Hamiltonians that require significant domain expertise to design and may not fully capture complex biological behaviors. NeuralCPM parameterizes the Hamiltonian with a neural network architecture specifically designed to respect symmetries in cellular dynamics, allowing it to be trained directly from observational data. The framework also enables the integration of known biological mechanisms by combining analytical Hamiltonians with the neural network in a hybrid model. The authors validate their approach through experiments on synthetic data and real-world multicellular systems, demonstrating that NeuralCPM can model cellular dynamics that traditional analytical Hamiltonians cannot capture. Claims And Evidence: The first experiment convincingly demonstrates that NeuralCPM can accurately recover parameters from cell sorting models, validating the learning algorithm. The second experiment with the Cellular MNIST dataset shows that NeuralCPM can capture complex spatial arrangements that analytical Hamiltonians cannot, with quantitative metrics supporting this claim. The final experiment applying the approach to real biological data demonstrates practical utility. Methods And Evaluation Criteria: The experimental methodology is sound, with appropriate metrics to evaluate both biological realism (cell volume, fragmentation) and the ability to capture complex collective behaviors (classifier score, axial alignment). The authors properly acknowledge limitations and discuss hyperparameter sensitivity. Theoretical Claims: The paper contains few formal theoretical claims requiring proof verification. 
I checked the mathematical formulation of the permutation invariant neural Hamiltonian architecture in Section 3.1, which appears correct in its construction using permutation-invariant aggregation functions. Experimental Designs Or Analyses: For the parameter recovery experiment (Section 4.2), the methodology is valid but could be more rigorous. The authors correctly generate synthetic data using known parameters and evaluate RMSE between learned and ground truth values. However, the rapid convergence shown in Figure 4 suggests this might be a simplified test case. The experiment would be more convincing with diverse initial conditions and parameter regimes. In the Cellular MNIST experiment (Section 4.3), the classifier score metric is innovative but potentially problematic - success at recreating visually recognizable digits may not indicate biologically meaningful dynamics. The complementary biological metrics (volume and fragmentation) help address this concern, making the overall analysis approach sound. For the bi-polar organization experiment with real biological data (Section 4.4), the axial alignment variance is an appropriate domain-relevant metric. However, the experimental design lacks statistical testing across multiple simulation runs to establish the significance of the reported improvements. Additionally, while the training instability issue with pure neural models is honestly reported, more analysis of this limitation would strengthen the paper. Additionally, including neural competing methods could further strengthen the validity of this experiment. Supplementary Material: I reviewed the supplementary material, which provides details on data generation, model implementation, and additional results that support the paper's claims. Relation To Broader Scientific Literature: The paper positions NeuralCPM within the intersection of energy-based models from machine learning and cellular Potts models from computational biology. 
The authors correctly identify the gap in the literature - while neural networks have been used to accelerate physical simulations, their application to improving the expressivity of cellular dynamics models is novel. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strengths:** - The Neural Hamiltonian architecture is thoughtfully designed to respect fundamental symmetries (permutation, translation) in cellular systems - The hybrid approach combining analytical and neural components is particularly valuable, improving both biological realism and training stability - The method requires only observations of self-organized states for training, not full trajectories - The validation against real biological data demonstrates practical utility beyond synthetic examples **Weaknesses:** - The approximate sampling compromises theoretical foundations - Architecture design choices lack sufficient justification and ablation studies - Limited to equilibrium dynamics, excluding many biological processes - No clear pathway for scaling to larger cellular systems - Training energy-based models can be unstable, as evidenced by the divergence of some models in the bi-polar sorting experiment Other Comments Or Suggestions: No Questions For Authors: - How computationally expensive is NeuralCPM compared to traditional CPMs, both for training and inference? This information would be valuable for potential users considering adopting this approach. - How sensitive is the model to the choice of temperature parameter T in the CPM? Since you noted this is "poorly identifiable from static snapshots," have you explored whether NeuralCPM could learn an appropriate effective temperature automatically? - For the bi-polar axial organization experiment, you mention that Neural Hamiltonian models without the closure term failed due to "fast divergence during training." 
Could you elaborate on specific techniques you tried to stabilize these models before resorting to the hybrid approach? - Does the training instability of pure neural Hamiltonians indicate a fundamental limitation of the approach? What modifications might address this? - What other neural competing methods might be applicable to the microscopy time-lapse data by Toda et al. (2018)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thorough review and valuable suggestions. We are glad to hear that you found our Neural Hamiltonian well-designed and that you appreciated the validation against real biological data. We address all comments and questions of your review below; new tables and figures (referred to as e.g. Table R1) can be found through https://anonymous.4open.science/r/neuralcpm-rebuttal-results-761F. Due to the character constraint, we refer to responses to other reviewers where indicated. ### General comments 1. **More initial conditions and parameter ranges for parameter recovery experiment:** We now also tested the approach on more parameter regimes, based on Type D and Type F cell sorting of [1]. Further, we tested five different random initializations of the parameters of the Hamiltonian to assess the robustness of the method. The results, shown in Table R1 and Figure R1, demonstrate that the parameters robustly converge to values close to the ground-truth for all four scenarios. 2. **Statistical testing for bi-polar axial organization:** We now added a statistical analysis of the NH + closure model compared to all baseline models as follows: we take 5 groups of 10 simulation runs, calculate the axial alignment RMSE as in Table 2 for each group, and then test whether the mean error of the NH + closure model is significantly lower than the respective baseline. We used Welch’s t-test, and applied a Bonferroni correction to control for multiple testing. The results, shown in Table R3, indicate that the mean error is significantly lower than all baseline models. ### Weaknesses 3. **Approximate parallel sampling:** Please see point 3 of our response to reviewer ZLYN. 4. **Architecture design choices, ablation studies, additional baselines:** Please see point 1 of our response to reviewer KQ4z for a discussion on the newly added baselines and ablations. 5. 
**Going beyond equilibrium dynamics:** NeuralCPM can be extended by also conditioning the Hamiltonian on earlier observed states, and training on full trajectory data. This can be interpreted as learning the energy difference, instead of the absolute value. We already conducted preliminary experiments with this approach that gave promising initial results, but they are not yet ready for publication, and we leave this extension for future work. 6. **Scaling to larger systems:** Please see point 9 of our response to reviewer KQ4z. 7. **Training stability:** See point 10 below. ### Questions 8. **Computational requirements:** Please see point 9 of our response to reviewer KQ4z. 9. **Sensitivity to temperature $T$:** Since we can always rescale the Hamiltonian in the CPM, without loss of generality, we can assume that the temperature parameter $T$ equals 1, and train the Hamiltonian for that temperature. The typical use of $T$ roughly corresponds to a parameter that determines the strength of the fluctuations in the system over time [2]; however, it does not substantially affect the behavior of the system at equilibrium. 10. **Techniques to stabilize training, and whether this is a fundamental limitation:** We applied established best practices from the literature for mitigating training instabilities when training discrete EBMs: We used Persistent Contrastive Divergence [3], similar to [4], and we regularized the Hamiltonians during training (Section 3.2). We also tried different architectural configurations during preliminary experiments, e.g. network depth. Although these techniques are relevant for training EBMs in general, adding the symbolic biology-informed term to the Neural Hamiltonian effectively mitigated training stability issues for our use-case. This is due to the biological terms functioning as ‘guardrails’ for the simulation, preventing biologically unrealistic behaviors. 
Finally, discrete EBMs are an active field of research, and recent advancements (e.g. [4,5,6]) are also applicable to NeuralCPM. 11. **Other neural competing methods:** To the best of our knowledge, there are no earlier proposed neural methods that apply to the cellular dynamics data of Toda et al. and rely only on equilibrium data. ### References [1] Edelstein-Keshet and Xiao (2023). Simplified Cell Sorting — Morpheus contributed examples. https://morpheus.gitlab.io/model/m2007. [2] Glazier and Graner (1993). Simulation of the differential adhesion driven rearrangement of biological cells. [3] Tieleman (2008). Training restricted Boltzmann machines using approximations to the likelihood gradient. [4] Grathwohl et al. (2021). Oops I Took A Gradient: Scalable Sampling for Discrete Distributions. [5] Zhang et al. (2022). A Langevin-like sampler for discrete distributions. [6] Schröder et al. (2023). Energy discrepancies: a score-independent loss for energy-based models.
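The significance analysis described in point 2 of this rebuttal (groups of simulation runs, one-sided Welch's t-test per baseline, Bonferroni correction) could be sketched as follows; the RMSE values and baseline names are synthetic stand-ins, not the paper's actual results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic per-group axial-alignment RMSEs standing in for the real runs:
# 5 groups for the NH + closure model and for each baseline model.
nh_closure = rng.normal(0.10, 0.01, size=5)
baselines = {
    "Cellsort": rng.normal(0.25, 0.02, size=5),
    "Cellsort + external potential": rng.normal(0.20, 0.02, size=5),
    "GNN Hamiltonian": rng.normal(0.18, 0.02, size=5),
}

# One-sided Welch's t-test per baseline (unequal variances), testing whether
# the mean NH + closure error is lower; Bonferroni-corrected threshold
# controls for multiple testing across baselines.
alpha = 0.05
threshold = alpha / len(baselines)
results = {}
for name, rmse in baselines.items():
    _, p = stats.ttest_ind(nh_closure, rmse, equal_var=False, alternative="less")
    results[name] = (p, p < threshold)
```

With these synthetic gaps the corrected test flags every comparison as significant, mirroring the pattern the rebuttal reports in Table R3.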
Prediction-Powered Adaptive Shrinkage Estimation
Accept (poster)
Summary: This paper proposes a "shrinkage estimator" for estimating multiple problems at once using PPI (prediction powered inference). They discuss the various methods by which one can reduce variance in such an estimator and how their method takes advantage of each, demonstrating theoretically that they can get better estimators by optimizing the CURE (correlation based risk estimate) which they introduce. Empirically they show on synthetic and real datasets that this improves on multi-estimation over both classical and PPI baselines. ######### UPDATE AFTER REBUTTAL: I remain positive about this paper - keeping my score at 4. Claims And Evidence: I find the claims + evidence shown in this paper to be convincing. The authors provide both theoretical and empirical evidence for their claim, which is that PAS is an improved estimator over PPI in the multi-prediction setup. I find the background and intuition given in sections 2-3 to be quite helpful. Methods And Evaluation Criteria: Yes, the evaluation setup is fairly standard for PPI and adapted to their specific setting (multi-prediction). It would be nice to see experiments shown on different sizes of labelled data to demonstrate at what value of n we start seeing improvements from PAS. Theoretical Claims: I did not check closely but I did look at the proofs in Appendix A. 
Experimental Designs Or Analyses: fairly standard setup and showing improved risk on two metrics Supplementary Material: I looked at the proofs in Appendix A although I did not check them carefully Relation To Broader Scientific Literature: connection to existing literature is well laid out in this paper - discusses very clearly in Section 3 exactly how various methods from the PPI and Bayesian space connect to this work Essential References Not Discussed: n/a Other Strengths And Weaknesses: Main critique: I would love a little more intuition about what “parallel problems” are supposed to be, in particular early on in the exposition and then tied to the examples in 2.2 and methods in 4.2. I was a little unclear on how "borrowing information across problems" is done (as stated on line 42). I can see there is shrinkage going on, but that is calculated on what seems to be a mostly problem-by-problem level as in (15). So a little more explanation here would help my understanding of what's going on; for instance, I don't see why we need the machinery from Eq 5 of defining the meta-distribution over problem parameters, since it doesn't seem there's any assumption made on how problems are connected or distributed. Quick note on the assumption of correlation/covariance statistics being known: Var(Y) and corr(Y, f) can be tough to estimate if labelled data is small. It would be good to discuss how this is implicitly an assumption that labelled data is big enough, and/or what regime of n you expect this estimator to work well in. Other Comments Or Suggestions: L204: “cl” in classical is underlined? L170 right: typo on “unlabeled” Related to the "sharing information" point, I would be curious to know if there's any type of covariate shift assumption that you think is helping here; for instance, it is roughly (not exactly) satisfied in the synthetic example Ex 2.2. Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for considering our idea convincing and for the thoughtful comments. * **Sharing information across problems:** We agree with the reviewer that further motivation of why and how it is possible to share information across problems will improve our exposition. In our revision we will add further motivation. Below is an alternative attempt to further explain and elaborate on Prop 5.3. Suppose as in that proposition that $n_j$ is the same across all problems, $N_j=\infty$, and that second moments $\rho_j,\tau_j,\sigma_j$ are identical across all problems. We make these assumptions throughout the remainder of our response. Then we could ask: What is the best convex combination of $\hat{\theta}_j^{PT}$ and $\tilde{Z}_j^f$ in the following sense: $$\omega_j^*\in\mathrm{argmin}_{\omega_j}\mathbb E[(\theta_j-(\omega_j\hat{\theta}_j^{PT}+(1-\omega_j)\tilde{Z}_j^f))^2].$$ By direct calculation (since the RHS is a convex quadratic in $\omega_j$) we find that: $$\omega_j^*=\frac{\mathbb E[(\theta_j-\tilde{Z}_j^f)^2]}{\mathbb E[(\theta_j-\tilde Z_j^f)^2]+\tilde\sigma^2}.$$ This implies the following intuitive result: the larger the MSE $\mathbb E[(\theta_j-\tilde{Z}_j^f)^2]$, the less weight we should assign to $\tilde{Z}_j^f$. Note that if we have a single problem $m=1$, then if $n_j$ is sufficiently large, we could estimate $\tilde{\sigma}^2$ accurately. However, we cannot estimate $\mathbb E[(\theta_1-\tilde{Z}_1^f)^2]$ accurately since we only have a single $\theta_1$ (a single problem). At best, we can compute an unbiased estimate of this quantity since: $\mathbb E[(\hat{\theta}_1^{PT}-\tilde{Z}_1^f)^2]-\tilde{\sigma}^2=\mathbb E[(\theta_1-\tilde{Z}_1^f)^2].$ Now suppose we have multiple problems; then we can learn how good the ML predictor is for estimating the $\theta_j$ by sharing information across problems.
To wit, as $m \to \infty$ $$\frac{1}{m}\sum_{j=1}^m(\hat{\theta}_j^{PT}-\tilde{Z}_j^f)^2-\tilde\sigma^2\stackrel{\mathbb P}{\to}\mathbb E[(\theta_j-\tilde Z_j^f)^2],$$ where we emphasize that the mean squared error on the RHS also integrates with respect to the **meta-distribution** $\mathbb P_{\eta}$ that models the distribution of the $\theta_j$. Thus we can also estimate the optimal $\omega_j^*$. In words: **sharing information across problems allows us to learn how good the ML predictor is** and to then decide how much to shrink toward it. Our implementation shares information in a similar way but is more involved due to the heteroscedasticity across problems (not all second moments are identical) and also due to heterogeneity in $n_j$, $N_j$. * **Role of meta-distribution:** See above. We note that we could state our result in a frequentist setting wherein all parameters are deterministic, similar to the classical result of [James-Stein](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator). We preferred to state results in terms of a meta-distribution, since we thought this would be a more familiar setup for the audience at ICML. * **Assumption on known second moments:** We agree that our assumption that the second-moment parameters $(\sigma^2_j, \gamma_j)$ are known is not easily satisfied in practice. In Sec 2 of the paper, such treatment is more of a theoretical convenience, and sample-based estimates are used for real-world datasets. Therefore, it is true that an implicit requirement for our theory of PAS to work well in many practical settings is that the sample-based estimates are good enough. While our regime ($m\to\infty$, $n_j,N_j$ finite) cannot yield asymptotic results on these sample-based estimates, we make the following remarks: 1. In practice, PAS works well even when $n_j$ is very small (and so the estimates are noisy).
To highlight this point further and motivated by your suggestion, we reran our real data examples with different labeled/unlabeled splits, going from 1%-40%. PAS still dominates other baselines even when only 1% of the data is labeled (e.g. $n_j < 10$ for Amazon Review). [[Link to the plots]](https://doi.org/10.6084/m9.figshare.28694657.v1) 2. In our response to Reviewer Tcxk, we propose a new variant of PAS called UniPAS, whose asymptotic guarantee does not require knowledge of second moments. UniPAS also has competitive empirical performance with PAS. * **Other comments:** * L204: The underline is intentional to denote the abbreviation "cl" for classical estimator. We can clarify this. * L170: Thanks for catching the typo; we will fix it. * Covariate Shift: We interpret this as potential differences in $P_{\eta_j}(X_{ij})$ across problems $j$. In the synthetic model, the mean of $X_{ij}$ varies with $\eta_j$, but that is not an assumption that PAS makes. What makes "information sharing" more effective, as mentioned above, is when $(\theta_j-\tilde{Z}_j^f)^2$ is small on average across problems. If the question refers to train/test covariate shift within problems, this violates the PPI assumption. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I find these clarifications helpful - the connection of E[theta - Z] to information shrinking as well as to the distributional assumptions piece are both useful for me. The experiments on small n and with UniPAS are also nice! I'm already positive about this paper and continue to be so.
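The information-sharing argument in the rebuttal above can be checked numerically. The sketch below uses the simplified setting (equal noise across all problems); the bias and variance constants are arbitrary synthetic choices for illustration, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simplified setting from the rebuttal: m parallel problems with equal noise.
m = 200_000              # number of parallel problems
sigma2 = 0.5             # variance of the power-tuned estimate around theta_j
theta = rng.normal(0.0, 1.0, size=m)             # means drawn from the meta-distribution
z = theta + 0.3 + rng.normal(0.0, 0.2, size=m)   # biased ML-based predictions Z_j
theta_pt = theta + rng.normal(0.0, np.sqrt(sigma2), size=m)  # unbiased PT estimates

# Sharing information across problems: an unbiased estimate of
# E[(theta_j - Z_j)^2], which no single problem can provide on its own.
mse_hat = np.mean((theta_pt - z) ** 2) - sigma2
true_mse = np.mean((theta - z) ** 2)  # here 0.3^2 + 0.2^2 = 0.13 in expectation

# Plug-in estimate of the optimal convex-combination weight on theta_pt.
omega_hat = mse_hat / (mse_hat + sigma2)
```

As `m` grows, `mse_hat` concentrates around the true mean squared error of the predictions, which is exactly the quantity needed to pick the shrinkage weight.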
Summary: This paper proposes a method for adaptively combining ML predictions with gold-standard labels, to estimate a multivariate parameter (e.g., the mean across several partitions of the data) with small mean-squared error. The paper builds upon the PPI++ estimator, while proposing to additionally perform global shrinkage using an adaptively chosen shrinkage parameter. Experimental results show that the approach improves over the classical estimator in MSE more often than a series of baseline methods. **Update after rebuttal**: I have reviewed the response and will maintain my score. Claims And Evidence: The claimed contributions in the introduction are (in my opinion) well-scoped and well-supported, clearly distinguishing asymptotic claims from finite-sample claims, and justifying the benefits of the method with empirical evidence. Methods And Evaluation Criteria: The method certainly makes sense, and has a clear intuitive basis. Regarding the evaluation criteria, the theoretical development is mainly (in my view) for intuition, given some limitations (e.g., knowledge of certain parameters, see "other strengths and weaknesses"). So I view the empirical evaluation as the main "evaluation" component, where the evaluation approach appears sound to me---I particularly appreciated the use of the "percentage of problems improved" metric as a thoughtful counter to typical concerns with shrinkage-type estimators. Theoretical Claims: Unfortunately, I did not check the proofs in the supplement due to a lack of time. Experimental Designs Or Analyses: As stated in "methods and evaluation criteria", the evaluation appears sound to me, mainly focusing on the real data analysis which I believe presents the most robust empirical evidence for the method.
Given that the focus of this paper is on estimation (and not inference / uncertainty quantification), it is reasonable to measure performance by MSE, and I appreciated the extra inclusion of the "% improved" metric, since (as the authors note) MSE across an entire vector can be improved while sacrificing performance on some dimensions. Supplementary Material: While I briefly skimmed the supplement, I did not read any particular section in depth. Relation To Broader Scientific Literature: As noted in part by the authors, there is a long line of work on combining imputed labels (which may be biased) with gold-standard labels. PPI++ is one such idea, where PPI constructs an unbiased estimator using imputed and gold-standard labels, while "power tuning" is added in PPI++ to improve efficiency by estimating the optimal degree of reliance on imputed labels. Shrinkage is another idea, which can be shown to provably improve estimation (in a particular sense, namely MSE across the entire vector of parameters) when there are 3 or more parameters to estimate, an idea going back to the James-Stein estimator. This shrinkage idea has also been applied in (other) areas of combining "potentially biased data" with "unbiased data", such as in causal inference, where the "biased" data is observational and the "unbiased" data experimental (see Rosenman et al. 2023, cited among other shrinkage-style estimators in this work). In my understanding, this paper brings together these two lines of work with perhaps an additional twist (e.g., adopting a particular Empirical Bayes perspective on shrinkage, which I am less familiar with), and demonstrates that they work well together empirically, while giving some theoretical intuition. Essential References Not Discussed: I believe much of the relevant related work is cited, but I would appreciate more discussion of how the proposed approach relates to some of the (cited) alternatives, e.g., Fisch et al. 
2024 (who similarly considered stratified PPI++ across different subpopulations) and Rosenman et al. 2023 (who similarly consider shrinkage towards a biased predictor in a related causal inference problem). For instance, does the "Shrinkage" baseline correspond to the method of Rosenman et al. 2023 or some variant of that method? Other Strengths And Weaknesses: The paper is very clearly written, the synthetic experiments are a nice intuition pump, and the real-data experiments were compelling in my view. However, a few points of clarifications / "weaknesses": 1. How is the PPI++ baseline implemented in the experiments? As currently presented, I imagine that PPI++ is implemented (separately?) for each dimension of the target parameter to estimate. However, PPI++ has more general formulations than mean estimation, e.g., one could view the present problem as a more general M-estimation problem, or even the task of learning a generalized linear model with a fixed coefficient for each stratum, both of which are covered in Sections 4 and 3 respectively of [1]. I'm not sure that actually results in a different estimator from the formulation presented here, but it would be helpful to clarify. 2. As noted in Section 2.2., all of the theoretical work starts from the assumption that $\tau_j, \sigma_j, \rho_j$ are known. However, a major challenge in this area of research is that these must be estimated from the (same) limited data that we have for estimating $E[Y_{ij}]$. Typically this deficit is handled somewhat crudely by appealing to asymptotics (i.e., convergence of similar terms in probability to their true values), but I don't see any of that here. Of course, the ultimate validation is empirical, but I'm curious how the authors would address this weakness in the theoretical development: For instance, do Proposition 5.1 and Theorem 5.2 still hold as both $m$ (number of tasks) and $n$ (number of samples) tend to infinity? 3. 
I didn't follow the analog between the main method derived in Section 4 and the exposition in Section 3. For instance, it seems in Section 3 that $\phi$ and $\theta$ are known to be correlated with the same mean, and in any case we should expect that $E(\theta \mid \psi)$ is a better predictor of $\theta$ than $E(\theta)$. Moving from Equation (12) to Equation (16), we use $\tilde{Z}_j^f$ in place of $E[\theta \mid \psi]$ by "analogy", but I don't understand the conceptual connection, given that e.g., $\tilde{Z}_j^f$ could be arbitrarily biased, while $E(\theta \mid \psi)$ is not. [1] A. N. Angelopoulos, J. C. Duchi, and T. Zrnic. Ppi++: efficient prediction-powered inference. arXiv preprint (2311.01453v2), 11 2023. URL: http://arxiv.org/abs/2311.01453v2, arXiv:2311.01453v2. Other Comments Or Suggestions: As a minor note, I found the use of distinct notation in Section 3 and Assumption 2.1 to be confusing, and it was a little hard for me to see the connections between the two. Questions For Authors: My questions are essentially derivative from my comments above, but to recap (in priority order): 1. How is the PPI++ baseline implemented in the experiments, and in particular, how does the "multivariate" form of PPI++ compare to the PT baseline shown here? Is the multivariate form of PPI++ essentially the same, but with only a single fixed parameter lambda instead of a per-coordinate parameter? 2. Where do existing shrinkage methods fit into the baselines shown in the experiments? For instance, does the "Shrinkage" baseline correspond to the method of Rosenman et al. 2023 or some variant of that method? 3. As noted in Section 2.2., all of the theoretical work starts from the assumption that $\tau_j, \sigma_j, \rho_j$ are known. For instance, do Proposition 5.1 and Theorem 5.2 still hold as both $m$ (number of tasks) and $n$ (number of samples) tend to infinity, i.e., is there at least an asymptotic argument that this does not matter? 4. 
Could you explain the conceptual reason why the analog in Equation (16) of $\tilde{Z}_j^f$ to $E(\theta \mid \psi)$ makes sense? I think I get the intuition at a very rough level (it's additional "prior" information), but see my comments above in "other strengths and weaknesses" Code Of Conduct: Affirmed. Overall Recommendation: 3
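As background for question 1 above, a per-problem power-tuned PPI mean estimate (in the PPI++ style) can be sketched on synthetic data as follows; the predictor `f` and all constants are stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-problem data: a small labeled set and a large unlabeled set.
n, N = 100, 10_000
x = rng.normal(size=n)
x_unl = rng.normal(size=N)
y = x + rng.normal(0.0, 0.3, size=n)  # labels exist only for the small set

def f(t):
    # Stand-in ML predictor (deliberately miscalibrated).
    return 1.5 * t + 0.1

# Power tuning: lambda weights the imputed-label correction; it is estimated
# from data and clipped to [0, 1]. lambda = 1 recovers plain PPI, lambda = 0
# the classical labeled-only mean. The estimator stays unbiased for any lambda.
lam = np.cov(y, f(x))[0, 1] / ((1 + n / N) * np.var(f(x_unl), ddof=1))
lam = float(np.clip(lam, 0.0, 1.0))
theta_hat = lam * f(x_unl).mean() + (y - lam * f(x)).mean()
```

The review's question is essentially whether a multivariate formulation would fit a single `lam` across all coordinates rather than one per problem, as in the snippet above.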
Rebuttal 1: Rebuttal: We thank the reviewer for a very helpful report, which has helped us improve on this work. * We agree that we should expand on connections to related work, which will be added as a new section in the appendix after our revision. Briefly: - Fisch et al. 2024 (StratPPI): The starting point is similar to ours with multiple parameters $\theta_1,...,\theta_m$. However, in StratPPI, these parameters are not of interest per se. Instead, there exist known weights $w_1,...,w_m$ such that the parameter of true interest is $\theta := \sum_{j} w_j \theta_j$. The ultimate goal is to come up with an unbiased and low-variance estimator of $\theta$. - Rosenman et al (2023): When their stratum weights are the same, their estimator corresponds to our shrinkage baseline up to differences in the one-dimensional family of shrinkage weights (our Eq. (15)). - PPI++: Suppose $n_j=n, N_j=N$ for all problems; then indeed we can cast our setting into PPI++ with a parameter vector. Moreover, PT is then using the same $\lambda$ for all problems (more on which below). * Previously, we were comparing to PPI++ applied separately to each problem (with its own $\lambda_j$). PPI++ itself requires asymptotics with $n_j,N_j\to\infty$. It would be possible to extend to asymptotics with $m,n\to\infty$; however, based on the reviewer comments we now have a more compelling methodological alternative: if we only seek to compare ourselves to PPI++ with a single $\lambda$, then we can learn the optimal $\lambda$ in asymptotics with $n$ fixed, $m\to\infty$ (a result complementary to the PPI++ paper, which takes $m$ fixed and $n\to\infty$). In closed form, we have $$\lambda^*:=\frac{\sum_{j=1}^mn_j^{-1}\gamma_j}{\sum_{j=1}^m\frac{n_j+N_j}{n_jN_j}\tau_j^2}$$ which coincides with the optimal weight selected by PPI++. We further clip $\lambda^*$ to be within $[0, 1]$.
Without knowing $\tau_j, \sigma_j, \gamma_j$, we can still plug in their sample-based estimates (see Point 2 for Reviewer Tcxk) and obtain $\hat\lambda$ (after clipping it as well), which has the property that $\hat\lambda - \lambda^* \overset{L^2}{\to} 0$ as $m\to\infty$ (this is different from the per-problem case, where $n_j \to \infty$ is needed). We denote this method as UniPT.
* Taking UniPT as a starting point, we develop UniPAS. This has an asymptotic guarantee ($n_j, N_j$ fixed, $m\to\infty$) analogous to PAS, replacing PT with UniPT (that is, we try to be asymptotically at least as good as PPI++ with a single $\lambda$, while PAS was trying to be at least as good as PPI++ with a separate per-problem $\lambda$). The upshot is that UniPAS **does not** require knowledge of $\tau_j, \sigma_j, \gamma_j$. In empirical results UniPAS is competitive with PAS (although PAS is slightly better).
* In addition to including two new methods (UniPT, UniPAS), we have rerun our real data examples with labeled/unlabeled split ratios ranging from 1% to 40%. [[Link to plots]](https://doi.org/10.6084/m9.figshare.28694657.v1)
* On the analogy of $Z_j^f$ and $\mathbb E[\theta\mid\psi]$: We agree that the analogy is somewhat loose and we will provide more details in the revision. Our goal is to provide a heuristic motivation for the one-dimensional parameterized family of weights $\omega_j(\cdot)$ (whose ultimate success is judged by the empirical results). To elaborate on this: in Eq. (12) of Sec 3, we find that the best weights take the form: $$\frac{\omega}{\omega+\sigma_{\varepsilon}^2},$$ for $\omega = \mathbb E[(\theta - \mathbb E[\theta \mid \phi])^2] = \mathbb E[\mathrm{Var}[\theta \mid \phi]]$. If we instead take the best convex combination of $\hat{\theta}^{cl}-\xi$ and $\mathbb E[\theta]$ (instead of $\mathbb E[\theta\mid\phi]$), the optimal weights again take the form above, now with $\omega=\mathbb E[(\theta-\mathbb E[\theta])^2] = \mathrm{Var}(\theta)$.
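A quick Monte Carlo sanity check of this optimal-weight form $\omega/(\omega+\sigma^2)$: in a Gaussian toy model (our own construction; `omega`, `sigma2`, `mu` and all numbers are hypothetical, not from the paper), the MSE-minimizing convex weight between a noisy unbiased estimate and the prior mean matches the closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
omega, sigma2, mu = 2.0, 1.0, 5.0   # Var(theta), noise variance, E[theta]
m = 200_000

# draw parameters theta and unbiased noisy estimates of them
theta = mu + rng.normal(0.0, np.sqrt(omega), m)
est = theta + rng.normal(0.0, np.sqrt(sigma2), m)

# grid-search the convex weight w in w*est + (1-w)*mu for minimal MSE
ws = np.linspace(0.0, 1.0, 201)
mse = [np.mean((w * est + (1 - w) * mu - theta) ** 2) for w in ws]
w_hat = ws[int(np.argmin(mse))]
w_theory = omega / (omega + sigma2)   # the omega/(omega+sigma^2) form, = 2/3 here
```

The empirical minimizer `w_hat` lands on `w_theory` up to grid resolution and Monte Carlo noise.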
Now suppose that we ask for the best convex combination (not necessarily a Bayes predictor) between $\hat{\theta}^{cl}-\xi$ and $h(\phi)$ where $h(\cdot)$ is some fixed function. Then we can show that the optimal convex combination is given by the form above with $$ \omega:=\mathbb E[(\theta-h(\phi))^2] = \mathbb E[\mathrm{Var}[\theta\mid\phi]] + \mathbb E[(h(\phi)-\mathbb E[\theta\mid\phi])^2]. $$ (The above expression is interesting as it forces us to inflate "$\omega$", i.e., to shrink less toward $h(\phi)$ in a way that depends on how close $h(\phi)$ is to $\mathbb E[\theta\mid\phi]$.) The takeaway is that for many possible predictors, the optimal weights have the same parameterized form up to the single parameter $\omega$ that varies according to the quality of the predictor. This motivates our one-dimensional family of weights. Once this family has been motivated, we learn $\omega$ in (15) in a way that does not depend on the above analogy at all by minimizing CURE. --- Rebuttal Comment 1.1: Comment: I appreciate the clarifications and new comparisons (which I believe should go into the main paper), and I'm glad my review was helpful for improving upon the work. I will retain my score, but I'm still positive on the paper (somewhere between a 3 and 4)
Summary: The paper proposes the Prediction-Powered Adaptive Shrinkage (PAS) method to enhance estimation accuracy for multiple means. PAS integrates Prediction-Powered Inference (PPI) with empirical Bayes shrinkage, first debiasing noisy machine learning (ML) predictions within each task and then leveraging information across tasks for improved estimation. Theoretically, the authors establish the asymptotic optimality of the tuning strategy and prove that PAS achieves a lower asymptotic mean squared error (MSE) than any baseline estimator. Experimental results on synthetic and real-world datasets demonstrate that PAS consistently outperforms existing methods. Claims And Evidence: Please see my comments for each category below. Methods And Evaluation Criteria: The proposed approach leverages Prediction-Powered Inference (PPI) and empirical Bayes methods to enhance statistical estimation, which I find to be a novel contribution. The theoretical guarantee of asymptotic optimality and the strong performance in numerical experiments further support its effectiveness. However, the proposed approach's performance is expected to depend heavily on the quality of the predictor used. If I understand the mechanism correctly, PAS does not have a built-in mechanism to correct or exclude misleading predictors, potentially impacting its effectiveness. Additionally, the computational complexity may be high due to the power tuning and adaptive shrinkage steps, which require optimization and could introduce additional overhead. Theoretical Claims: Although the asymptotic optimality of PAS provides a strong theoretical guarantee, one of my main concerns is the assumption of finite moments. The authors state that “PAS inherits the flexibility of PPI in working with any black-box predictive model and makes minimal distributional assumptions about the data.” However, the boundedness of moments may not always hold. 
Experimental Designs Or Analyses: The numerical experiments demonstrate the superior performance of the proposed method compared to the baselines. However, given that PAS's performance critically depends on the quality of the predictor, it would be interesting to analyze its sensitivity to heavily biased predictors, for example, by intentionally using a biased predictor to observe its impact. Additionally, since PAS involves power tuning and adaptive shrinkage, the computational cost is expected to be high. A detailed comparison of computational efficiency, such as running time benchmarks, would be helpful to assess the trade-off between accuracy improvement and computational expense. Supplementary Material: Yes, I briefly went through the proof sketch of the theorem, but there is a possibility that I may have missed something. Relation To Broader Scientific Literature: This paper could contribute to estimation, particularly for parallel ML tasks that require accurate estimates. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: How does violating the assumption of bounded finite moments affect the optimality? How does PAS perform when the ML model is biased and the resulting estimates are therefore incorrect? Is there a way to screen out and exclude those poor estimates? If one of the estimates is nearly optimal (close to the true value), does the shrinkage method over-shrink it? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the novelty and contribution in our paper, as well as the many constructive comments. We hope our response below addresses all the concerns directly.

### **Background on PPI**

The core idea of Prediction-Powered Inference (PPI) is as follows. Given existing black-box ML predictors $f$, how can we use them to enhance classical statistical procedures and improve their efficiency? The focus in this literature is on employing safeguards so that if the ML predictor is good, then statistical gains can be large, while if the ML predictor is bad, then one still retains some form of statistical guarantees (such as consistency). The focus of this literature is not on how to train the best possible ML models (most results are agnostic to that), but rather how to wrap around an existing arbitrary $f$ to improve statistical procedures. In this paper, we follow this established perspective in the PPI literature. Here, *arbitrary* typically means that $f$ can be of any functional form or a total black-box (e.g., API calls to LLMs), and should thus be considered given and fixed. While adapting to the quality of $f$ is important, an “arbitrarily bad” $f$ is rarely considered in practice, since prediction-powered methods are generally deployed only when $f$ is expected to provide at least some signal about the response. Nevertheless, PAS **does** include mechanisms to adapt to both poor and high-quality predictors. To demonstrate this, we revisit the synthetic model and consider the following predictors:
1. a (newly added) very poor predictor $f_r$, which outputs a $Unif[0, 2]$ value for all inputs,
2. the near-optimal predictor $f_1(x) = x^2$ from the paper (in fact, $f_1$ closely approximates $E[Y_{ij} | X_{ij} = x]$).
**Table 1: MSE (± se) $\times 10^{-3}$ in synthetic model ($K=200$ replicates)**

| | Classical | Prediction Avg | PPI | PAS |
| - | - | - | - | - |
| $f_r$ | 3.14 ± 0.03 | 549.17 ± 2.57 | 24.05 ± 0.22 | 3.14 ± 0.03 |
| $f_1$ | 3.14 ± 0.03 | 0.27 ± 0.00 | 2.69 ± 0.03 | 0.27 ± 0.00 |

In the extreme case where $f_r$ is very biased and provides no useful information, PAS effectively defaults to the classical estimator, showing robustness against poor predictions. On the other hand, with a very good $f_1$, PAS tracks the predictive mean closely and performs best. Importantly, no over-shrinkage is observed. In words, PAS learns how good the predictor is in a data-driven way.

### **Bounded moment assumptions**

We appreciate the reviewer's question regarding the finite moment assumptions. We first remark that these assumptions are satisfied when the response and prediction are bounded (as in the Amazon and Galaxy datasets), or when their joint distribution is reasonably well-behaved (as in our synthetic setting). We note that *some* moment assumptions are standard across the PPI literature. Specifically, finite **second moments** of the joint model $(Y_{ij}, f(X_{ij}))$ are generally needed for basic procedures like power tuning. These are typically viewed as mild assumptions, although we admit that they preclude heavy-tailed distributions. Our work extends PPI to compound mean estimation using shrinkage principles, particularly risk minimization via an unbiased estimate (CURE). To prove the asymptotic optimality of PAS, we require finite fourth moments. This is a technical condition used to control the variance of the risk estimate itself. Finally, addressing the reviewer's question about violations: if the fundamental second moment assumption fails, the variance-reduction premise of PPI itself becomes ill-defined. If only the fourth moment assumption is violated, PAS may lose its formal asymptotic guarantee to outperform power tuning and the predictive mean.
However, CURE remains an unbiased risk estimate provided that second moments exist.

### **A light-weight approach**

A key strength of PAS is its low computational overhead compared to both classical and PPI estimators. Although PAS uses both power tuning and adaptive shrinkage, the first stage yields a closed-form expression for the optimal tuning parameter $\lambda_j^*$; the second stage involves optimizing CURE over a one-dimensional parameter space, and each evaluation is inexpensive since CURE has an analytic form as well. This makes the optimization very fast in practice. We first precompute all ML predictions. After this step, all estimators (including PAS) are very fast to compute. Below we benchmark the runtimes for three estimators on the Amazon Review dataset with the precomputed ML predictions. In each trial, the task is to estimate the mean product ratings for $m = 200$ products. We then record the time to construct each estimator by taking the average of 100 repeated constructions. The table reports the mean and max time (in milliseconds) to construct each estimator across 10 such trials.

| Estimator | Mean Time (ms) | Max Time (ms) |
| - | - | - |
| Classical | 7.4 | 8.4 |
| PT | 21.2 | 22.0 |
| PAS | 34.5 | 35.4 |
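As background for the robustness discussion above, the standard PPI mean estimator (prediction mean on unlabeled data plus a bias correction from labeled data) can be simulated directly. This is our own sketch, not code from the paper; the synthetic setup loosely mirrors the rebuttal's $f_1(x)=x^2$ example, and all names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, reps = 50, 5000, 2000
theta_true = 4.0 / 3.0   # E[X^2] for X ~ Unif[0, 2]

def one_trial(f):
    x = rng.uniform(0, 2, n)
    y = x ** 2 + rng.normal(0, 0.2, n)   # labeled responses
    xt = rng.uniform(0, 2, N)            # unlabeled inputs
    classical = y.mean()
    # PPI mean: prediction mean on unlabeled data + labeled bias correction
    ppi = f(xt).mean() + (y - f(x)).mean()
    return classical, ppi

# a near-optimal predictor, analogous to f_1(x) = x^2 in the rebuttal
res = np.array([one_trial(lambda v: v ** 2) for _ in range(reps)])
mse_classical = np.mean((res[:, 0] - theta_true) ** 2)
mse_ppi = np.mean((res[:, 1] - theta_true) ** 2)
```

With a good predictor the bias-correction term is nearly noise-free, so the PPI estimator has a much smaller MSE than the classical labeled-only mean.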
Summary: The paper proposes prediction-powered adaptive shrinkage (PAS), an extension of prediction-powered inference (PPI) that uses empirical Bayes ideas to further reduce estimation error when multiple related estimation problems are solved together. The paper is well written and well thought out, with good theoretical and empirical results. Claims And Evidence: The claims are well supported by both theory and empirical results. The paper is well written and explains the ideas clearly with good intuition-building examples. Methods And Evaluation Criteria: Yes. Evaluation on both synthetic and real-world examples demonstrates the efficacy of the method. Theoretical Claims: Theoretical claims are asymptotic in nature, holding as the number of problems m -> infty. They also assume the variance parameters tau_j, sigma_j, rho_j are known. While the paper refers to this as of secondary concern, and refers to existing papers that took the same approach, this would seem to me to be quite a significant assumption, and it would be good to work out the implications of having to estimate these from data on the theoretical developments. Experimental Designs Or Analyses: The experimental design looks sensible, with clear separation of datasets used to estimate the different quantities needed (e.g. the BERT fine-tuning). It is nice to see the method working across both textual and image domains with different predictors etc. Supplementary Material: Skimmed through proofs. Relation To Broader Scientific Literature: The relation to broader literature is clearly set out through the paper. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: 1. What assumptions or methods might be required to strengthen the asymptotic results to a finite-m one? 2.
It would be good to more explicitly work out what happens if the variance parameters tau_j, sigma_j, rho_j are unknown and need to be estimated, and the impact of this additional estimation step on the resulting method and theoretical results. 3. Please elaborate on how these variance parameters are estimated from data in the empirical results (particularly the Amazon and Galaxy Zoo datasets). 4. It would be nice to look at performance of the method and baselines for smaller n_j/N_j, and also for a wider range of the label/unlabeled split beyond 20-80. The method should be able to work with a wider range. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the careful assessment. The thoughtful questions have helped us improve our work. For better exposition, we slightly reordered our responses to the questions.

1. **Finite $m$ results.** Our current theoretical analysis permits finite-sample bounds. We will track these results more explicitly in the revision. For instance, the $o(1)$ term in Prop 5.1 is actually $O(1/\sqrt{m})$.

2. **How are variance parameters estimated in practice?** For real-world datasets, we use sample-based estimates for the variance parameters. We estimate $\sigma^2_j$ and $\gamma_j$ with the standard unbiased estimators $$ \hat{\sigma}^2_j = \frac{1}{n_j-1} \sum_{i=1}^{n_j} (Y_{ij} - \bar Y_j)^2, \quad \hat\gamma_j = \frac{1}{n_j-1}\sum_{i=1}^{n_j}(Y_{ij}-\bar Y_j)(f(X_{ij}) -\bar{Z}_j^f) $$ using labeled data. For the prediction variance $\tau^2_j$, we use predictions in both labeled/unlabeled data: $$ \hat\tau_j^2 = \frac{1}{n_j + N_j -1}\bigg(\sum_{i=1}^{n_j}(f(X_{ij}) - \hat Z_j^f)^2 + \sum_{i=1}^{N_j}(f(\tilde X_{ij}) - \hat{Z}_j^f)^2 \bigg) $$ where $\hat Z_j^f = \frac{1}{n_j + N_j}\left(\sum_{i=1}^{n_j}f(X_{ij}) + \sum_{i=1}^{N_j}f(\tilde X_{ij})\right)$.

3. **Impact of estimating variance parameters (with Point 2).** We agree with the reviewer that our current theory does not account for estimation errors due to plugging in sample-based estimates of the variance parameters. One possible remedy would be to consider asymptotics in which all of $n_j, N_j, m \to \infty$. However, we prefer asymptotics with $n_j, N_j$ fixed and $m \to \infty$, to represent the regime in which we have a lot of very noisy/difficult individual problems, and also to distinguish our results from the standard setup in the PPI literature that keeps $m$ fixed and takes $n_j, N_j \to \infty$. We note that PAS with plug-in variance estimates empirically works well even for very small $n_j$; see Point 4 below.
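The plug-in estimators in Point 2 can be written out directly. This is a sketch with hypothetical array names (one problem $j$); the pooled prediction mean divides by $n_j + N_j$, and the synthetic check below uses an identity predictor so the true moments are known.

```python
import numpy as np

def plug_in_moments(y, fx, fxt):
    """Sample-based estimates of sigma_j^2, gamma_j, tau_j^2 for one problem j.

    y: labeled responses Y_ij; fx: predictions f(X_ij) on labeled inputs;
    fxt: predictions f(X~_ij) on unlabeled inputs (hypothetical names).
    """
    n, N = len(y), len(fxt)
    sigma2 = np.sum((y - y.mean()) ** 2) / (n - 1)                 # hat sigma_j^2
    gamma = np.sum((y - y.mean()) * (fx - fx.mean())) / (n - 1)    # hat gamma_j
    z_hat = (fx.sum() + fxt.sum()) / (n + N)                       # pooled prediction mean
    tau2 = (np.sum((fx - z_hat) ** 2) + np.sum((fxt - z_hat) ** 2)) / (n + N - 1)
    return sigma2, gamma, tau2

# synthetic check: y = 2 * f(x) + noise, so sigma^2 ~ 5, gamma ~ 2, tau^2 ~ 1
rng = np.random.default_rng(2)
fx = rng.normal(0, 1, 1000)
y = 2 * fx + rng.normal(0, 1, 1000)
fxt = rng.normal(0, 1, 5000)
sigma2, gamma, tau2 = plug_in_moments(y, fx, fxt)
```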
Motivated by the reviewer's comment, we now consider two further methods: UniPT (that is, power-tuning with the same $\lambda$ for all problems, see response to reviewer qLFS) and UniPAS, which is similar to PAS but uses the same power-tuning parameter across problems. We can prove that UniPAS has asymptotic ($m \to \infty$, $n_j,N_j$ fixed) risk less than or equal to that of UniPT, PPI, the classical estimator, and the prediction mean. **The result for UniPAS accounts for the data-based estimation of the variance parameters**. Here is a sketch of UniPAS:

- We start with UniPT by estimating a single power-tuning parameter $\hat{\lambda}$ (as in our response to reviewer qLFS) that has the property that $\hat{\lambda} - \lambda^* \to 0$ in $L^2$ as $m \to \infty$ (with $n_j,N_j$ fixed), where $\lambda^*$ is the optimal single tuning parameter. Call the resulting estimator $\hat{\theta}_j^{UniPT}$.

- We come up with stable **working** estimates of $Var[\hat{\theta}_j^{UniPT}]$ by **pretending** $\sigma_j^2, \tau_j^2, \gamma_j$ are the same across all $j$ (but sample sizes can vary). The common values can be estimated accurately by averaging the estimates in Point 2 over all $m$. Plugging these in, we derive working estimates $\bar{\sigma}_j^2$ of $Var[\hat{\theta}_j^{UniPT}]$. Then we consider the one-dimensional parameterized family of weights $$ \bar{\omega}_j(\omega) = \frac{\omega}{\omega + \bar{\sigma}_j^2}.$$ This family retains the property that for $\omega=0$ it recovers the prediction mean and for $\omega=\infty$ it recovers UniPT.

- Pretend momentarily that $\bar{\sigma}_j$ and $\hat{\lambda}$ are deterministic. (This is not actually needed; by steps 1 and 2 above, asymptotically these converge to a deterministic limit.) Suppose we consider the family of estimators $$ \bar{\omega}_j(\omega)\hat{\theta}_j^{UniPT} + (1-\bar{\omega}_j(\omega))\tilde Z_j^f.
$$ An unbiased estimate of risk is given by CURE, $$\frac{1}{m}\sum_{j=1}^m (2\bar{\omega}_j(\omega) - 1)Var[\hat{\theta}_j^{UniPT}] + 2(1 - \bar{\omega}_j(\omega)) Cov[\hat{\theta}_j^{UniPT}, \tilde Z_j^f] +(1 - \bar{\omega}_j(\omega))^2\big(\hat{\theta}_j^{UniPT} - \tilde Z_j^f\big)^2,$$ which is analogous to the expression of CURE for PAS. Now here comes the punchline. If above we replace $Var[\hat{\theta}_j^{UniPT}]$ and $Cov[\hat{\theta}_j^{UniPT}, \tilde Z_j^f]$ by unbiased estimates (analogous to Point 2), then we still retain an unbiased estimate of risk. Moreover, since we are averaging over $m$ with $m \to \infty$, we can establish asymptotic uniform consistency of our objective to the true risk, and thus we can establish asymptotic optimality for UniPAS. 4. **Different $n_j/N_j$ and split ratios.** Following the reviewer's suggestion we reran our real data analyses with the new methods and different labeled/unlabeled splits, going from 1%-40%. [[Link to the plots]](https://doi.org/10.6084/m9.figshare.28694657.v1)
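As a rough illustration of how minimizing this one-dimensional CURE objective over $\omega$ could look in code (a sketch of our own, not the paper's implementation; every array below is a hypothetical plug-in quantity and the grid bounds are arbitrary):

```python
import numpy as np

def cure(omega, var_hat, cov_hat, theta_hat, z_tilde, sig_bar2):
    """Unbiased risk estimate for the one-parameter shrinkage family.

    Per-problem length-m arrays (all hypothetical plug-ins):
    var_hat ~ Var[theta_hat_j], cov_hat ~ Cov[theta_hat_j, Z~_j^f],
    sig_bar2 ~ working variances defining the weights w_j(omega).
    """
    w = omega / (omega + sig_bar2)
    return np.mean((2 * w - 1) * var_hat
                   + 2 * (1 - w) * cov_hat
                   + (1 - w) ** 2 * (theta_hat - z_tilde) ** 2)

# toy inputs for m = 4 problems
var_hat = np.array([0.50, 0.60, 0.40, 0.55])
cov_hat = np.array([0.10, 0.05, 0.08, 0.12])
theta_hat = np.array([1.0, 2.0, 1.5, 0.5])
z_tilde = np.array([1.2, 1.8, 1.4, 0.7])
sig_bar2 = var_hat.copy()

# omega = 0 recovers the prediction mean; large omega approaches UniPT
grid = np.linspace(0.0, 50.0, 2001)
risks = np.array([cure(w, var_hat, cov_hat, theta_hat, z_tilde, sig_bar2)
                  for w in grid])
omega_hat = float(grid[np.argmin(risks)])
```

The returned `omega_hat` is the grid minimizer of the estimated risk; in the actual method the averaging over $m \to \infty$ problems is what makes this risk estimate uniformly close to the true risk.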
MF-LAL: Drug Compound Generation Using Multi-Fidelity Latent Space Active Learning
Accept (poster)
Summary: The paper introduces a novel framework for multi-fidelity active learning that is based on learning of hierarchical latent representations. The authors evaluate the approach in a multi-fidelity setting culminating in ABFE evaluation on two different protein targets. Claims And Evidence: The claims are supported by evidence, with some questions regarding the experimental evaluation and presentation of the results (below). Methods And Evaluation Criteria: The multi-fidelity setting used, in particular the choice of oracles, is convincing, and the method is evaluated on two different protein targets, which further strengthens the results. The main issue lies in the number of tested molecules (discussed in more detail below). Theoretical Claims: N/A Experimental Designs Or Analyses: Discussed below. Supplementary Material: No. Relation To Broader Scientific Literature: The paper builds on earlier work on multi-fidelity generative models, including MF-GFN, and expands on some of its perceived limitations. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The problem setting is of (in my opinion) high significance, and the authors thoroughly evaluate their proposed method, including some ablations and comparison with reasonable baselines. The setup of oracles also reflects a real-life setting, which is another strength of the paper. The biggest weakness, in my opinion, is however also related to this. Because the authors opt to use a setting close to real-life, with MD as the highest-fidelity oracle, they are limited to sampling very few samples in the end (15 per method). This is somewhat understandable given the high computational cost, but limits the reliability of the results. I wonder if it would be possible to instead do a simulated multi-fidelity setup, as in some earlier papers; or perhaps to pick a lower-cost highest-fidelity oracle.
I do not expect the authors to necessarily address this in any way, since it would require a substantial rework of the experiments, but I am raising my concern about the reliability of the presented results. About actionable weaknesses, I do have the following comment: “For MF-LAL and the most competitive baseline for each target, we ran an additional 25 compounds” - this seems to me to be a bizarre design decision, since the authors compare, among other things, the number of discovered modes (“scaffolds”). This obfuscates the results, in particular since for the scaffolds, the authors perform statistical analysis only for the two best methods (the information about which is hidden away in the appendix), which might be misleading. I would strongly suggest limiting the results in this table to the same budget of oracle calls (15 each), and perhaps having a separate table / figure for the 40-compound setting. Other Comments Or Suggestions:
- “We will focus on “query synthesis” approaches (Angluin, 1988), where the model generates its own queries to send to the oracles, speeding up learning compared to approaches that query oracles with samples from a fixed candidate set.” - I’d be curious to see some citation backing up this claim (that’s also more recent).
- “Ensembled AutoDock4 (cost: 44s, ROC-AUC BRD4(2): 0.80, c-MET: 0.80)” - one cost is provided for both targets, while c-MET seems to use 5 crystal structures, whereas the other uses 8; could authors explicitly confirm that the cost is supposed to be the same in both cases?
- “Additionally, the pairwise Tanimoto similarity among the 40 compounds generated by MF-LAL is less than 0.2” - does this refer to mean pairwise similarity?
Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful feedback and positive comments about the work. > **Q1:** Because the authors opt to use a setting close to real-life, with MD as the highest-fidelity oracle, they are limited to sampling very few samples in the end (15 per method). This is somehow understandable given high computational cost, but limits the reliability of the results. **A1:** We would like to emphasize the statistical tests in Section 4.3 and Appendix B.3, where we show that the ABFE scores of MF-LAL compounds are significantly better than those from baselines in terms of both mean and top scores for both targets. So while we do have a relatively small number of samples, we show that MF-LAL makes statistically significant and reliable improvements over baselines. > **Q2:** I wonder if it would be possible to instead do a simulated multi-fidelity setup, as in some earlier papers; or perhaps to pick a lower-cost highest-fidelity oracle. **A2:** While many previous works have indeed focused on a simulated multi-fidelity setup, we do not think that such a setup is a very good proxy for real-world simulators. This is because the method of ABFE calculation is fundamentally different from docking or other activity prediction techniques, and so it is difficult to substitute it with other cheaper techniques and still reach useful conclusions. If it is possible to reach statistically significant conclusions using the real-world MD simulator, which we believe to have done in this paper, then we think it is most valuable to use the real simulator instead of an artificial setup. > **Q3:** ...for the scaffolds, the authors perform statistical analysis only for two best methods (the information about which is hidden away in the appendix), which might be misleading. I would strongly suggest limiting the results in this table to the same budget of oracle calls (15 each), and perhaps having a separate table / figure for 40 compound setting. 
**A3:** We agree that making two separate tables for 15 and 40 compounds would increase the clarity of the results. In the updated draft, we will include these two tables, as well as statistical tests comparing MF-LAL to all applicable baselines in both settings. As noted in Appendix B.3, statistical tests comparing MF-LAL to all baselines where all methods were limited to 15 compounds still showed statistically significant results in favor of MF-LAL (except for the binomial test for the number of active scaffolds between MF-LAL, which generated 4 scaffolds, and REINVENT, MF-AL-PPO, and DecompDiff which generated 1 scaffold, where p=0.07). The results for MF-LAL and the top baseline for each target using only 15 compounds are shown below, because they were not included in the original draft. These results remain consistent with our conclusions about the strong performance of MF-LAL.

| | BRD4(2) ABFE | | | | | | c-MET ABFE | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Method | Mean $\pm$ std | # active scafs | Count | 1st | 2nd | 3rd | Mean $\pm$ std | # active scafs | Count | 1st | 2nd | 3rd |
| MF-AL-PPO | | | | | | | -4.2 $\pm$ 2.8 | 0 | 15 | -6.6 | -5.8 | -5.5 |
| Pocket2Mol | -4.3 $\pm$ 3.8 | 1 | 15 | -9.8 | -8.7 | -8.0 | | | | | | |
| MF-LAL | **-6.2** $\pm$ 3.9 | **6** | 15 | **-12.0** | **-10.2** | **-9.8** | **-6.7** $\pm$ 3.1 | **4** | 15 | **-12.9** | **-7.9** | **-7.7** |

> **Q4:** I’d be curious to see some citation backing up this claim [about query synthesis] (that’s also more recent).

**A4:** [1, 2, 3, 4] are recent papers that explore query synthesis approaches, and many of them find that query synthesis outperforms traditional pool-based methods in active learning. We will include these citations in the updated draft. [1] Morand et al. “Efficient Exploration of Microstructure-Property Spaces via Active Learning.” Frontiers in Materials 2022. [2] Schumann and Rehbein.
“Active Learning via Membership Query Synthesis for Semi-Supervised Sentence Classification.” CoNLL 2019. [3] Guo et al. “Dual generative adversarial active learning.” Applied Intelligence 2021. [4] Zhu and Bento. “Generative Adversarial Active Learning.” arXiv 2017. > **Q5:** “Ensembled AutoDock4 (cost: 44s, ROC-AUC BRD4(2): 0.80, c-MET: 0.80)” - one cost is provided for both targets, while c-MET seems to use 5 crystal structures, whereas the other uses 8; could authors explicitly confirm that the cost is supposed to be the same in both cases? **A5:** Thank you for pointing out this omission, the shown cost is indeed only for BRD4(2). The correct cost for c-MET is 68s, which we will include in the updated draft. > **Q6:** “Additionally, the pairwise Tanimoto similarity among the 40 compounds generated by MF-LAL is less than 0.2” - does this refer to mean pairwise similarity? **A6:** Yes, mean pairwise similarity. We will update the draft to clarify.
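For concreteness, mean pairwise Tanimoto similarity can be computed as below. This is a generic pure-Python illustration over fingerprint bit sets with toy values, not the pipeline the authors used (in practice one would compute, e.g., RDKit Morgan fingerprints of the actual compounds).

```python
from itertools import combinations

def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity between two fingerprint bit sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def mean_pairwise_tanimoto(fps):
    """Average similarity over all unordered pairs of fingerprints."""
    sims = [tanimoto(a, b) for a, b in combinations(fps, 2)]
    return sum(sims) / len(sims)

# toy bit sets standing in for fingerprints of generated compounds
fps = [{1, 2, 3}, {3, 4, 5}, {6, 7}, {1, 6, 8}]
diversity_ok = mean_pairwise_tanimoto(fps) < 0.2   # low mean similarity = diverse set
```

A mean pairwise similarity below 0.2, as reported for the 40 generated compounds, indicates a structurally diverse set.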
Summary: This paper introduces Multi-Fidelity Latent space Active Learning (MF-LAL), a framework that integrates a set of different oracle functions to guide the generation of molecules toward higher predicted activity. It combines the generative model and surrogate model into a single framework, and the computational cost is reduced with an active learning method. MF-LAL is able to achieve around 50% improvement in binding free energy score for the generated molecules compared to baseline methods, in particular for two disease-relevant proteins (BRD4(2) and c-MET). Claims And Evidence: The main claim of this paper is the effectiveness of the proposed MF-LAL framework, which surpasses other baseline methods in the molecule binding free energy optimization task. Here are some concerns:
1. In table 1, POCKET2MOL and MF-LAL generated 40 molecules while other baseline methods generated 15 molecules. It seems that this is unfair, as generating more molecules would definitely result in better ABFE scores for the top-3 molecules and more active scaffolds. But the increase in mean value is solid, so I think the authors should show separated results: the mean value for 40 molecules; the number of active scaffolds and top-3 values for 15 molecules.
2. Another concern is that the authors use Pocket2Mol and DecompDiff as baseline models. But they are 3D pocket-based molecule generation models instead of optimization models. There are some 3D optimization models, like DecompOpt[1] and TagMol[2]. Also, RGA[3] can be included.
3. The cases shown in Figure 2 are not very good. There are some uncommon or weird structures in the generated molecules.
[1] Zhou, Xiangxin, et al. "DecompOpt: Controllable and Decomposed Diffusion Models for Structure-based Molecular Optimization." The Twelfth International Conference on Learning Representations. [2] Dorna, Vineeth, et al. "TAGMol: Target-Aware Gradient-guided Molecule Generation."
ICML'24 Workshop ML for Life and Material Science: From Theory to Industry Applications. [3] Fu, Tianfan, et al. "Reinforced genetic algorithm for structure-based drug design." Advances in Neural Information Processing Systems 35 (2022): 12325-12338. Methods And Evaluation Criteria: ## Method The proposed method is interesting. It is trying to integrate multi-fidelity surrogate functions to guide the molecule generation. It introduces a Hierarchical Latent Space Representation to optimize the latent space at each fidelity and decode the molecule at the highest fidelity. It also includes an active learning step. Overall I think the method is simple but effective and makes sense for the problem. ## Evaluation The evaluation is done on two targets. The concern is that the number of targets is too limited, but it is understandable as ABFE is very time-consuming and expensive. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: The experimental design is reasonable. It is done on two cancer-relevant proteins, BRD4(2) and c-MET. Also, ABFE is well-validated on those targets and has good agreement with experimental data. As previously discussed, the issues are the limited number of targets, and the unmatched number of generated molecules across methods. Supplementary Material: no Relation To Broader Scientific Literature: I think the scope of this paper is molecule optimization for better binding energy. It does not relate to the broader scientific literature. Essential References Not Discussed: As previously stated, 3D optimization models, like RGA[1], DecompOpt[2] and TagMol[3], should be referenced and added to baselines. [1] Fu, Tianfan, et al. "Reinforced genetic algorithm for structure-based drug design." Advances in Neural Information Processing Systems 35 (2022): 12325-12338. [2] Zhou, Xiangxin, et al. "DecompOpt: Controllable and Decomposed Diffusion Models for Structure-based Molecular Optimization."
The Twelfth International Conference on Learning Representations. [3] Dorna, Vineeth, et al. "TAGMol: Target-Aware Gradient-guided Molecule Generation." ICML'24 Workshop ML for Life and Material Science: From Theory to Industry Applications. Other Strengths And Weaknesses: ## Other Strengths 1. The paper is well written, with a clear description of methods and experiment results. 2. The method is simple but effective. ## Other Weaknesses 1. Some 3D-based optimization models are not included in the baselines. 2. Tested targets are limited. 3. When comparing the top-3 values, it should also generate 15 molecules, as for other methods, for a fair comparison. Other Comments Or Suggestions: 1. If possible, it would be beneficial to have figure 3 shown in the main text, as it gives a clearer illustration of the framework. Questions For Authors: 1. From the ablation it seems that even the Linear regression fidelity contributes to the performance. Removing it would result in a significant decrease in the number of active scaffolds from 4 to 0. The ABFE scores for the top-3 molecules also significantly decreased. Can you explain more on why this happens? My understanding is that it should have some kind of influence, but not one that obvious, as ABFE is much more accurate than a linear regression model. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful feedback and positive comments about the work. > **Q1:** In Table 1, POCKET2MOL and MF-LAL generated 40 molecules while other baseline methods generated 15 molecules. It seems that this is unfair, as generating more molecules would definitely result in better ABFE scores for the top-3 molecules and more active scaffolds. But the increase in mean value is solid, so I think the authors should show the separated results: mean value for 40 molecules; number of active scaffolds and top-3 values for 15 molecules. > When comparing the top-3 values, it should also generate 15 molecules, as for the other methods, for a fair comparison. **A1:** Based on this and other comments from reviewers, we will make two separate tables in the updated draft reporting results on both 15 (for all methods) and 40 compounds (for the methods that have them). See the response to Reviewer QauA for the MF-LAL and baseline results on only 15 compounds, including the top compound scores and the number of active scaffolds, which remain consistent with our conclusions about the strong performance of MF-LAL. > **Q2:** Another concern is that the authors use Pocket2Mol and DecompDiff as baseline models. But they are 3D pocket-based molecule generation models instead of optimization models. There are some 3D optimization models, like DecompOpt[1] and TagMol[2]. Also, RGA[3] can be included. **A2:** Thank you for the references, we will cite them in the updated draft. We agree that comparing MF-LAL to a 3D optimization model would be valuable, so we are currently running TAGMol as a baseline. While the results are not yet finished due to computational cost, we will post them when they are finished during the author-reviewer discussion period. Following the reviewer’s suggestion, we have also run RGA.
We ran RGA using a single fidelity (docking) as the oracle, similar to how we implemented the SF-VAE (only docking) and REINVENT (only docking) baselines in our paper. We chose to use docking, instead of ABFE, as the single-fidelity oracle due to computational cost and the presumed inability of a genetic algorithm to make use of a very small number of ABFE oracle calls. We used the default parameters of RGA, with random ZINC250k compounds as the starting population for the genetic algorithm. The results from RGA are as follows:

| | BRD4(2) ABFE | | | | | | c-MET ABFE | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Method | Mean $\pm$ std | # active scafs | Count | 1st | 2nd | 3rd | Mean $\pm$ std | # active scafs | Count | 1st | 2nd | 3rd |
| RGA (only docking) | -3.1 $\pm$ 3.9 | 0 | 15 | -7.8 | -7.0 | -6.8 | -2.1 $\pm$ 3.0 | 0 | 15 | -6.0 | -5.5 | -5.4 |
| MF-LAL | **-6.3** $\pm$ 3.7 | **8** | 40 | **-12.0** | **-11.3** | **-10.2** | **-7.1** $\pm$ 3.0 | **6** | 40 | **-13.9** | **-12.9** | **-7.9** |

The results from MF-LAL are also shown for comparison. MF-LAL significantly outperforms RGA, with the latter having performance similar to that of the REINVENT baseline. We will include these results in the updated draft. Finally, we do not think DecompOpt is a good baseline to compare with because it requires knowledge of existing binders for generation (to compute the “reference arms”). Since MF-LAL and our other baselines are de novo generation methods that do not use knowledge from existing binders, we do not think a comparison with DecompOpt would be fair.

> **Q3:** tested targets are limited

**A3:** The number of targets we tested is limited by the high cost of ABFE calculations. In addition, the ABFE framework we use is only validated and configured for a few targets, and BRD4(2) and c-MET are the only ones of those targets that are of interest from a biological/drug discovery perspective.
> **Q4:** From the ablation it seems that even the Linear regression fidelity contributes to the performance. Removing it would results in significant decrease of number of active scaffolds from 4 to 0. The ABFE scores for top 3 molecule also significantly decreased. Can you explain more on why this happens? My understanding is that it should have some kind of influence but should not be that obvious, as ABFE is much accurate than a linear regression model. **A4:** The lowest fidelity oracle is critical for performance because it provides the majority of the data for the multi-fidelity model. As stated in Appendix B, we provide the model with an initial dataset of 200,000 ZINC250k compounds and associated oracle outputs at the first fidelity level (linear regression). Thus, removing this fidelity greatly reduces the total data available to the model. Even if this data is significantly less accurate than ABFE, the quantity of data is still important for model performance, and so it is expected that removing this data significantly reduces performance.
Summary: This paper introduces a new approach for generating drug candidates, called MF-LAL. The proposed method utilizes a variational autoencoder with multiple latent spaces arranged hierarchically to accommodate different fidelities. The first level employs a regression model trained to predict activity on known compounds. The second and third levels are based on molecular docking to one or several protein structures, respectively. The final level focuses on absolute binding free energy (ABFE) prediction, which is the most computationally intensive model for estimating binding to the target protein. Compounds are generated using an active learning method with query synthesis. The acquisition function relies on surrogate models trained on data already collected from various fidelities. Initially, molecules of the lowest fidelity are generated until uncertainty falls below a predefined threshold. Subsequently, the model begins generating molecules from the next level of fidelity. This process continues for seven days. The results show that MF-LAL can generate molecules with the best ABFE, outperforming single-fidelity and single-latent-space models. ## update after rebuttal The Authors addressed all my comments. I decided to maintain my positive score. Claims And Evidence: The claims in the paper are supported by experimental results. Methods And Evaluation Criteria: The method is explained clearly, and the selected evaluation metrics for the two biological targets effectively demonstrate the value of the proposed model. However, the choice of the decoder architecture is nonstandard and is compared only with non-autoregressive methods (more details in Questions For Authors). There are no details on how GCN was implemented for generating molecules. Moreover, I am curious about the validity of the generated compounds. 
The SELFIES representation is used to ensure that the string representation can be decoded to molecules, but rather heavy filtering criteria are applied (QED > 0.4, SA < 4, no rings with at least seven atoms) - what percentage of the generated compounds match these criteria for the tested models? Theoretical Claims: There are no proofs of theoretical claims that need to be checked. Experimental Designs Or Analyses: The experimental design is sound, but a few aspects could be improved. For example, only 15 compounds are generated and evaluated in Tables 1 and 2 for all but two best models. I believe all methods should be assessed based on 40 compounds. It is unclear whether the number of active scaffolds and the results of the top compounds are computed from all 40 compounds or just 15. Furthermore, the initially generated compounds prior to filtering could be assessed in terms of validity, synthetic accessibility, and drug-likeness. Supplementary Material: I read the supplementary material. Relation To Broader Scientific Literature: This paper presents an intriguing proposition for effectively training generative models to produce increasingly useful molecule candidates. A hierarchical approach with multiple latent spaces is proposed to capture representations for each fidelity separately. Stochastic variational Gaussian processes are used as surrogate functions trained on the obtained calculations of binding affinity, making these functions easy to train even for big datasets. MF-LAL can be useful in early drug discovery stages to propose novel hit candidates. Essential References Not Discussed: The key references have been described in the paper. Other Strengths And Weaknesses: All of my comments are described in the other sections. Other Comments Or Suggestions: In Algorithm 1, there is probably a typo in line 7. It should be $\Sigma_{\lambda_k}(z_k)<\gamma_k$. Questions For Authors: 1. 
How were the thresholds -8.2 and -6.8 kcal/mol chosen for the two tested targets? Was it based on reference compounds or on the balance of activity classes when training the regression model for the lowest fidelity? 2. The choice of the decoder architecture seems nonstandard. Text representations such as SMILES and SELFIES are usually generated in an autoregressive fashion. Have you tried using RNNs? Was the transformer trained to predict all characters at the same time, or autoregressively? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful feedback and positive comments about the work. > **Q1:** There are no details on how GCN was implemented for generating molecules. **A1:** We will include details of our GCN implementation in the Appendix in the updated draft. Briefly, we used a three-layer graph convolutional network with one-hot encoded atom types for the encoder, and an inner product decoder as described in Kipf and Welling 2016. > **Q2:** The SELFIES representation is used to ensure that the string representation can be decoded to molecules, but rather heavy filtering criteria are applied (QED > 0.4, SA < 4, no rings with at least seven atoms) - what percentage of the generated compounds match these criteria for the tested models? **A2:** Among all compounds generated after training with active learning, 55% for BRD4(2) and 68% for c-MET fulfilled the filtering criteria. We find these percentages high enough such that there is no need to do multi-objective optimization, especially because generation is very fast so discarding 25-50% of the generated molecules is not problematic. We will include these numbers in the updated draft. > **Q3:** It is unclear whether the number of active scaffolds and the results of the top compounds are computed from all 40 compounds or just 15. **A3:** The number of active scaffolds and the top 3 compounds in Table 1 are computed from all 40 compounds (for MF-LAL and the top baselines). Based on this and other comments from reviewers, we will make two separate tables in the updated draft reporting results on both 15 and 40 compounds for clarity. See the response to Reviewer QauA for the MF-LAL results on only 15 compounds, including the top compound scores and the number of active scaffolds. > **Q4:** the initially generated compounds prior to filtering could be assessed in terms of validity, synthetic accessibility, and drug-likeness. 
**A4:** Prior to filtering, the mean SAscore of generated compounds is 3.9 for BRD4(2) and 3.6 for c-MET. The mean QED is 0.48 for BRD4(2) and 0.50 for c-MET. The validity is 100% because we used SELFIES strings, which are guaranteed to be valid. We will include these numbers in the updated draft. > **Q5:** How were thresholds -8.2 and -6.8 kcal/mol chosen for the two tested targets? Was it based on reference compounds or balance of the activity classes in training regression model for the lowest fidelity? **A5:** The thresholds were chosen based on reference compounds from previous works that investigate BRD4(2) and c-MET (see lines 368-373, left side). We roughly picked these thresholds based on the typical experimental affinities of the best binders analyzed in these works. For BRD4(2), Liu et al. 2017 (see paper for citation) explore various BRD4(2) inhibitors and generally consider compounds active when they have submicromolar (<1 $\mu$M) activity, which is the cutoff we used in our paper. For c-MET, Naguib et al. 2024 states that “potent activity” is achieved with a 12 $\mu$M inhibitor, so we set our activity cutoff to <10 $\mu$M. We will include these details in the updated draft. > **Q6:** The choice of the decoder architecture seems nonstandard. Text representations such as SMILES and SELFIES are usually generated in the autoregressive fashion. Have you tried using RNNs? Was the transformer trained also to predict all characters at the same time, or autoregressively? **A6:** Both Transformers and RNNs have indeed been used in such applications. We chose to only test the Transformer architecture, however, because it has demonstrated very similar or slightly superior performance relative to RNNs on molecular generation tasks [1, 2]. Our Transformer was trained to predict characters autoregressively using the standard Transformer decoder architecture. [1] Chen et al. “Molecular language models: RNNs or transformer?” Briefings in Functional Genomics 2023.
[2] Xu et al. “REINVENT-Transformer: Molecular De Novo Design through Transformer-based Reinforcement Learning.” arXiv 2024.
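To illustrate the greedy autoregressive decoding described in A6, here is a minimal, dependency-free sketch. The `stub_logits` function and its toy four-token vocabulary are illustrative placeholders standing in for the actual Transformer decoder and SELFIES vocabulary, not part of MF-LAL:

```python
def greedy_decode(next_token_logits, bos, eos, max_len):
    """Greedy autoregressive decoding: the token chosen at each step is
    appended to the prefix, which is fed back in for the next prediction."""
    seq = [bos]
    for _ in range(max_len):
        logits = next_token_logits(seq)
        tok = max(range(len(logits)), key=logits.__getitem__)
        seq.append(tok)
        if tok == eos:
            break
    return seq

# Stub standing in for the Transformer decoder over a 4-token vocabulary
# (0 = BOS, 1 = EOS, 2 = "[C]", 3 = "[O]"): it deterministically prefers
# the sequence [C], [O], EOS.
def stub_logits(prefix):
    preferred = {1: 2, 2: 3, 3: 1}
    logits = [0.0, 0.0, 0.0, 0.0]
    logits[preferred[len(prefix)]] = 1.0
    return logits

print(greedy_decode(stub_logits, bos=0, eos=1, max_len=10))  # [0, 2, 3, 1]
```

A real decoder would replace `stub_logits` with a forward pass over the prefix; the feedback loop is what makes the generation autoregressive, in contrast to predicting all characters at once.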
Summary: This paper introduces MF-LAL, a generative algorithm for drug discovery based on biological activity rather than docking. Rather than conditioning on molecular docking, the authors propose a pipeline to generate molecules based on molecular-dynamics-based binding free energy. As MD-based free energy calculations are prohibitively expensive, MF-LAL uses multiple oracles at varying fidelity levels. The authors also present a sample-efficient training scheme to minimize the high-fidelity data required. The active-learning-based method generates molecules based on the acquisition function over the hierarchical latent space and expands the dataset. Optimized molecules are then generated using gradient-based optimization to find extrema in the latent space at some fidelity. ## Update After Rebuttal I have read the rebuttal and decided to keep my score. Claims And Evidence: * A key component of the active learning method presented is the threshold calculation in Algorithm 2, line 7: $\sigma_{\lambda_k}(\cdot)$ must be well calibrated in order for the active learning to work. * The reconstruction accuracy of the decoder is quite low and not very robust. Methods And Evaluation Criteria: * The training details, including the supplement, are sparse and disjointed. It is not easy to understand how exactly the model is trained. I know space is limited, but even an algorithm in the supplement in addition to the loss function would help, especially for * The hierarchical latent space is interesting, and optimizing at a single fidelity updates the latent space at all fidelities. * One concern would be that such an optimization causes adverse effects on the fidelities not being trained.
Theoretical Claims: N/A Experimental Designs Or Analyses: - The usual generation quality measures such as validity, diversity, and synthesizability are not presented. - Since all the baselines don’t have statistically significant results, it is a bit concerning to compare the results. Supplementary Material: Sections A, B, and C. Relation To Broader Scientific Literature: The authors tackle a very important problem: sample-efficient training of surrogate models. It is usually prohibitively costly to acquire the amount of data usually needed for deep learning models. So a method that maximizes sample efficiency is hugely important in the specific field of drug discovery, but also beyond, in other scientific domains. Essential References Not Discussed: N/A Other Strengths And Weaknesses: * The retrieval component of the pipeline is very similar to fuzzy search and semantic search algorithms and is not particularly novel. * Substructure similarity is measured with the Tanimoto distance of Morgan fingerprints, which does not capture 3D information. * The authors have many baselines and discuss potential drawbacks due to the lack of experimental samples. Other Comments Or Suggestions: * Line 114 Col 2: querying oracle k –> querying oracle f_k * Line 209 Col 2: We use the posterior variance of the GP surrogate $\sigma_{\lambda_k}(\cdot)$ –> We use the posterior variance, $\sigma_{\lambda_k}(\cdot)$, of the GP surrogate $\hat{f}_k$ Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful feedback and positive comments about the work. > **Q1:** The training details including the supplement are sparse and disjointed. It is not easy to understand how exactly the model is trained. I know space is limited, but even an algorithm in the supplement in addition to the loss function would help **A1:** Thank you for the suggestion, we will consolidate all training details into a single section and add a training algorithm to the Appendix in the updated draft. > **Q2:** One concern would be that such an optimization causes adverse effects on the fidelity not being trained. **A2:** The loss function we use while training considers molecules from all fidelity levels 1...K, regardless of the current level k. In other words, while k dictates which fidelity data we will add to the dataset, the entire dataset is used to train the model regardless of k. This ensures that training at level k does not degrade performance at the other fidelity levels. Analyzing the reconstruction accuracy, and the accuracy of the surrogate models, confirms that they retain their performance when other fidelities are being trained. > **Q3:** The usual generation quality measures such as validity, diversity, and synthesizability are not presented **A3:** Among the generated compounds for both targets following filtering, the diversity (1 - mean pairwise Tanimoto similarity) is 0.81 for BRD4(2) and 0.83 for c-MET. The synthesizability (mean SAscore) is 3.6 for BRD4(2) and 3.5 for c-MET. The drug-likeness (mean QED) is 0.59 for BRD4(2) and 0.63 for c-MET. The validity is 100% for both targets, because we use SELFIES strings that guarantee validity. These metrics are in the range of typical drug compounds, so we consider them satisfactory. We will include these numbers in the updated draft. 
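As a side note, the diversity metric quoted in A3 (1 minus the mean pairwise Tanimoto similarity) can be sketched in a few lines. The toy on-bit sets below are illustrative stand-ins for real Morgan fingerprints, which would normally be computed with RDKit:

```python
from itertools import combinations

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity of two fingerprints given as sets of on-bits:
    |intersection| / |union|."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 1.0

def diversity(fps):
    """1 - mean pairwise Tanimoto similarity over a list of fingerprints."""
    pairs = list(combinations(fps, 2))
    return 1.0 - sum(tanimoto(a, b) for a, b in pairs) / len(pairs)

# Toy on-bit sets standing in for Morgan fingerprints of three compounds.
fps = [{1, 2, 3}, {2, 3, 4}, {5, 6, 7}]
print(round(diversity(fps), 3))
```

Higher values indicate a more structurally diverse set; the 0.81–0.83 values reported above would correspond to compounds sharing relatively few fingerprint bits on average.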
> **Q4:** Since all the baselines don’t have statistically significant results, it is a bit concerning to compare the results **A4:** As reported in Section 4.3, compounds generated by MF-LAL had better ABFE scores than all baseline methods at a statistically significant level (see Appendix B.3 for statistical details). Specifically, the difference between both the mean ABFE scores of all generated compounds, as well as the scores of the top 3 compounds, is significant between MF-LAL and each baseline for both targets. For the “# of active scaffolds” test, MF-LAL also produced significantly more active scaffolds than other baselines. Since MF-LAL shows statistically significant improvements over baseline methods, we do not think these comparisons are concerning. > **Q5:** Substructure similarity is measured with Tanimoto distance of Morgan fingerprints which does not capture 3D information. **A5:** If the reviewer is referring to the similarity metric we used to compute the number of active scaffolds, we do not think that capturing 3D information is critical in this case. Scaffold similarity, which is what we measure in our paper, is the most commonly used diversity metric by medicinal chemists and strongly relates to the overall structural diversity of a set of compounds [1]. Additionally, measuring the 3D shape diversity is a difficult task, and is not commonly done by practitioners [1]. [1] Galloway et al. “Diversity-oriented synthesis as a tool for the discovery of novel biologically active small molecules.” Nature Communications 2010.
Off-Policy Actor-Critic for Adversarial Observation Robustness: Virtual Alternative Training via Symmetric Policy Evaluation
Accept (poster)
Summary: The paper presents a novel off-policy reinforcement learning approach that addresses adversarial input observations without requiring additional environmental interactions, thus enhancing sample efficiency and avoiding inefficiencies in agent-environment interactions. By reformulating adversarial learning as a soft-constrained optimization problem, the method eliminates mutual dependencies between the agent and adversary. The approach is theoretically supported by the symmetric property of policy evaluation and shows consistent success and strong sample efficiency in evaluations, making it a promising contribution to the field. Claims And Evidence: The claims of this work are supported by both theory and empirical results. Methods And Evaluation Criteria: Yes, I think the proposed method and evaluation are reasonable. However, this paper also mentions [1] multiple times; this baseline should be included. #### [1] Reddi, A., Tölle, M., Peters, J., Chalvatzaki, G., and D’Eramo, C. Robust adversarial reinforcement learning via bounded rationality curricula. ICLR 2024 Theoretical Claims: I went through the theories and proofs. I did not go into the details of the proofs in the appendix, but overall it sounds reasonable. Experimental Designs Or Analyses: Please refer to "Questions For Authors" Supplementary Material: The authors provide a lot of detail on the ablation studies and implementation in the appendix, but without a code base. Relation To Broader Scientific Literature: * This work opens an alternative way to implicitly perform adversarial training without a two-player game. It would better contribute to the community with open-source code. Essential References Not Discussed: I think the authors cover most of the references. Other Strengths And Weaknesses: Strengths * well-written * comprehensive evaluation on different attacks * proposed method is impactful and concise * algorithms are supported with theories Weaknesses * In 5.2.
Attacker Settings, it is mentioned that the common attack scales are used in previous studies. Please cite those works. * Although this work briefly shows how to choose the mix in line 283, I still have a concern about over-optimism/pessimism, which is pointed out by many robust optimization works, e.g., [2]. Further defense or mentioning explicit limitations can enhance this work. #### [2] Juncheng Dong et al. Variational Adversarial Training Towards Policies with Improved Robustness. AISTATS 2025 Other Comments Or Suggestions: Please avoid citing the arXiv version of a paper if it has been accepted. For example, the paper below. #### Reddi, A., Tölle, M., Peters, J., Chalvatzaki, G., and D’Eramo, C. Robust adversarial reinforcement learning via bounded rationality curricula. ICLR 2024 Questions For Authors: * How can we tell the most robust sample-efficient method from the values shown in Table 1? * I thought that the different heuristic attacks are only for evaluation. Then what is the reason that the most robust sample-efficient methods under different attacks are different training methods? * Can you elaborate more on why both SAC and VALT-EPS have higher variance only at the end for the Hopper task in Figure 2? * Could you discuss when people should choose between VALT-EPS and VALT-SOFT, from both theoretical and empirical perspectives? * Table 6 records the computation time among different off-policy methods. In the experiments, SAC-PPO took more training steps, per previous tables. Then how can a fair comparison regarding computation time be made in Table 6? * I am concerned that the two variants of the proposed method introduce many more hyper-parameters compared with existing robust RL methods. * It seems that each proposed method has been evaluated under different attacks.
While training requires hyper-parameter tuning for each method, including the baselines, how do you decide which hyper-parameters should be adopted and when training can be stopped? In other words, it may happen that the learning curves perform better for hyper-parameter A, while the policy may be more robust to some attacks in evaluation for hyper-parameter B, and more robust to other attacks in evaluation for hyper-parameter C. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive and detailed feedback. Below, we respond to each point. --- ## Methods and Evaluation Criteria ### About Baseline [1] > [1] Reddi et al., *Robust adversarial reinforcement learning via bounded rationality curricula*, ICLR 2024 Thank you for the suggestion. We considered adapting [1]—originally proposed for dynamics robustness—to the observation robustness setting. However, we found key theoretical mismatches. [1] trains the adversary using a separate replay buffer under stationary dynamics. Under observation perturbations, the effective dynamics become $\mathcal{F} \circ \pi(s' \mid s, \tilde{s})$, leading to inconsistency between the adversary's buffer and the trajectory induced by the current policy, thereby violating convergence assumptions. This reveals a structural gap: [1] assumes dynamics that receive two actions in parallel, whereas our setting involves a cascade structure (perturb, then act). Bridging this gap is an interesting direction, but it is beyond the scope of this work. --- ## Relation to Broader Literature > **Code Release** Thank you. Given the multiple baselines and training procedures, additional time is needed for cleanup and documentation. We are committed to releasing the code with the camera-ready version. --- ## Weaknesses - **[W1] Missing Citations in Section 5.2:** Thank you. We will revise the manuscript to include relevant citations. - **[W2] Mixture Coefficient (line 283):** Our heuristic choice was made for simplicity, but we agree it may cause miscalibration. We will add a discussion and cite works on distributionally robust optimization. --- ## Other Comments - **arXiv vs. Published Citations:** We will update arXiv references to their published versions where applicable. 
--- ## Questions for Authors - **[Q1] Identifying Robust and Sample-Efficient Methods** In Table 1, the highest scores within each on-policy or off-policy category are highlighted in bold, and the best-performing methods among the most sample-efficient are shaded in gray. Our methods consistently maintain strong robustness, especially in complex tasks (HalfCheetah, Ant), emphasizing the value of MDP-aware modeling. Following Reviewer AMCX's suggestion, we will use worst-case metrics to better capture robustness. - **[Q2] Why Robust Methods Vary Across Attacks** We attribute this to equilibrium differences across training methods. Robust RL does not guarantee global optimality, and policies may settle into different robustness profiles depending on the training dynamics. - **[Q3] Variance in Hopper** This is addressed in our response to Reviewer wL61. - **[Q4] When to Use VALT-EPS vs. VALT-SOFT** Two key factors: 1. **Computation**: VALT-EPS uses PGD-based attacks across two networks, requiring GPU acceleration (line 1689). 2. **Task Sensitivity**: In failure-sensitive environments, VALT-SOFT better explores around risky observations. In tolerant domains like HalfCheetah, VALT-EPS is more effective. - **[Q5] Computation Time in Table 6** Table 6 is intended as reference, not comparison. We agree that step count affects time. In the revision, we’ll include per-update time and breakdowns to improve fairness. - **[Q6] Hyperparameter Complexity** Our method introduces fewer tunable parameters than ATLA. For example, we avoid learning adversary networks. Most hyperparameters follow SAC; new ones (e.g., mixture rate) are easier to tune. We will include hyperparameter tables for clarity as: https://gofile.io/d/8Eq728 - **[Q7] Hyperparameter Tuning and Early Stopping** Thank you for the question. We set the number of training steps based on when the policies achieve sufficiently high scores under non-attacked conditions. 
While some environments may require fewer steps, we adopted a consistent training schedule to ensure fair comparisons across methods. In off-policy settings, longer training can help refresh the replay buffer with newer data, mitigating distribution shift and potentially improving robustness—albeit at the cost of sample efficiency. However, we did not adopt this strategy in the current work and instead consider addressing distribution shift an important direction for future research. Tuning hyperparameters across multiple attacks is challenging. During training, we apply simple heuristic attacks to identify promising candidates, and later refine them under various attacks. While trade-offs exist, our experiments indicate that robustness tends to be more influenced by the choice of method than by hyperparameter settings. We view robust hyperparameter tuning under various attacks as an important and open challenge for future work. --- We hope our responses help clarify your concerns, or will be addressed in the revision. If you find the direction and potential contributions meaningful—especially as a baseline for off-policy adversarial RL—we would sincerely appreciate your consideration in revisiting the score.
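As a supplement to [Q4], the idea behind a PGD-style observation attack can be sketched in a toy 1-D form. Note this is only an illustrative sketch: in VALT-EPS the gradient comes from the critic networks rather than the finite differences used here, and observations are multi-dimensional:

```python
def pgd_attack(value_fn, s, eps, steps=20, lr=0.1, h=1e-4):
    """Projected gradient descent on the observation: search within the
    eps-ball around s for the perturbed observation that minimizes value_fn.
    A numeric central-difference gradient keeps this toy dependency-free."""
    s_tilde = s
    for _ in range(steps):
        grad = (value_fn(s_tilde + h) - value_fn(s_tilde - h)) / (2 * h)
        s_tilde -= lr * grad                           # descend the value
        s_tilde = max(s - eps, min(s + eps, s_tilde))  # project onto ball
    return s_tilde

# Toy value function peaked at 0: the attack pushes the observation toward
# the boundary of the eps-ball, where the value is lowest.
V = lambda x: -x * x
adv = pgd_attack(V, s=0.2, eps=0.5, steps=50)
```

In practice the descent direction is the gradient of the learned value (or Q) function with respect to the observation, and the projection keeps the perturbation within the attack budget.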
Summary: This paper proposes a method to address observation robustness which does not rely on additional interactions with the environment, making the algorithm off-policy. Claims And Evidence: **General:** By looking at the formulation in Sec. 3 and the related work, it seems this work aims only at state adversarial robustness. Then the scope should be explicitly written early in the title/abstract/introduction. **Introduction:** Why is the ATLA framework mentioned that frequently? It seems less novel to me. As pointed out by the authors, the ATLA framework by Zhang et al. and Sun et al. dates from 2021. But then, the authors claim this framework is adopted by Pinto 2017. It is not sensible to claim a later-proposed framework is adopted by an algorithm proposed 4 years earlier. RARL (Pinto et al.) directly uses such an adversarial training framework (similar to GANs as well) without theory; thus this so-called ATLA framework is just a normal adversarial training protocol. Back to this work, the authors should not introduce this method by citing Zhang et al., but rather cite RARL/GANs first and then say something like "formally concluded by Zhang et al." "The mutual dependency between the victim agent and the adversary doubles the required sample size for training, leading to inefficiencies and increased computational costs." Not sure about this: doesn't every work studying adversarial robustness need such additional cost? Methods And Evaluation Criteria: The graphical demonstration in Fig 1 is clear and nice-looking! Theoretical Claims: NA Experimental Designs Or Analyses: The experiments are comprehensive, but there are concerns. The first one is Fig 2, where the caption does not clearly state whether the experimental tasks are under attack, and if so, under which attack. Besides, SAC is significantly better than the others, which might also be a concern. The issue with Table 1 is that it is hard to conclude whether the proposed method is superior or not.
By looking at the average score, the proposed method only works in HalfCheetah, but this way of concluding is definitely biased. Considering the defense effect against all attacks is important. My proposal would be to add an additional column/plot of worst-case performance under these attacks. I think it is a much better metric for concluding the defense effect than average performance. Supplementary Material: NA Relation To Broader Scientific Literature: The paper is based on a general adversarial training framework for robustness, namely VALT [Zhang et al., 2021]. In this work, the authors remove the interactions with the environment by providing an analytical solution to the optimal adversary. Essential References Not Discussed: The main issue is that the related work seems to only include "Adversarial Attack and Defense on State Observations." There are lots of other types of attacks, such as action attacks: Pinto, Lerrel, et al. "Robust adversarial reinforcement learning." *International conference on machine learning*. PMLR, 2017. Tessler, Chen, Yonathan Efroni, and Shie Mannor. "Action robust reinforcement learning and applications in continuous control." *International Conference on Machine Learning*. PMLR, 2019. The minor one is that the authors claim novelty in leveraging the off-policy notion to address robustness, but off-policy learning often seems required in offline robustness problems. It would be good to include some of these in the related work, such as but not limited to: Panaganti, Kishan, et al. "Robust reinforcement learning using offline data." *Advances in neural information processing systems* 35 (2022): 32211-32224. Rigter, Marc, Bruno Lacerda, and Nick Hawes. "Rambo-rl: Robust adversarial model-based offline reinforcement learning." *Advances in neural information processing systems* 35 (2022): 16082-16097. Tang, Xiaohang, et al. "Adversarially Robust Decision Transformer." *arXiv preprint arXiv:2407.18414* (2024). Other Strengths And Weaknesses: The paper is well-written.
However, it is hard to draw conclusions from the experiments; a better metric is needed. Other Comments Or Suggestions: For now, I will offer a weak reject, but will consider increasing the score if the concerns and questions are addressed, especially those about the experiments, specifically that the results make it hard to draw conclusions. Typos: Appendix C: "basical procedure"; bottom of page 21: "pratical procedure"; Algorithm 1: "minimiging". Questions For Authors: The paper is partly motivated by making the algorithm for robustness off-policy. Will this be beneficial if you are doing online RL? Code Of Conduct: Affirmed. Overall Recommendation: 3
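To make the proposed metric concrete, the worst-case aggregation could be computed as in the following sketch (all scores, attacks, and tasks below are made up for illustration; numpy assumed):

```python
import numpy as np

# Rows = attack types, columns = tasks. Made-up return values.
scores = np.array([
    [5200.0, 3100.0, 4800.0],   # Random
    [4700.0, 2900.0, 4500.0],   # MAD
    [4100.0, 2600.0, 4200.0],   # PGD
])

avg_per_task = scores.mean(axis=0)    # averaging can mask one weak defense
worst_per_task = scores.min(axis=0)   # worst case over all attacks, per task

print(worst_per_task)  # -> [4100. 2600. 4200.]
```

The worst-case column rewards a defense only if it holds up against every attack, which is the property the review argues matters.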
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and detailed feedback. Below, we respond to each of the concerns and suggestions. --- ## Claims and Evidence ### **Scope Clarification (Title/Abstract/Introduction)** Thank you for pointing this out. We agree that our work specifically addresses **adversarial robustness in state observations**, and we will explicitly clarify this scope in the **title**, **abstract**, and **introduction** of the revised manuscript. In particular, we plan to revise the title to: **"Off-Policy Actor-Critic for Adversarial Observation Robustness: ..."** to more accurately reflect the focus of our study. --- ### **ATLA in Introduction** Thank you for your comment regarding the historical positioning of the ATLA framework. Our intention in referring to ATLA was to highlight its influence in the **observation robustness** setting, where it has seen wide empirical use in this domain. We did not mean to suggest that ATLA introduced adversarial training prior to earlier works such as **GANs** or **RARL (Pinto et al., 2017)**. We sincerely apologize for any confusion this may have caused. In the revised manuscript, we will: - Properly cite **GANs** and **RARL** as foundational works in adversarial training. - Position **ATLA** as a more recent **formalization** of this framework, specifically within observation perturbation settings. --- ### **On Sample Inefficiency of Adversarial Training** > "The mutual dependency between the victim agent and the adversary doubles the required sample size for training..." We appreciate this concern. We agree that not all robust RL approaches incur this cost—especially **offline** or **model-based** methods, as noted. Our original statement specifically referred to **model-free online RL**, where the agent and adversary must interact with the environment during training. We will revise the manuscript to clarify this scope and explicitly note the exceptions. 
--- ## Experimental Clarity ### **Figure 2 Clarification** Thank you for the helpful comment. Due to space constraints, the caption of Figure 2 lacked sufficient detail. We will revise it to explicitly state that the evaluations are conducted under nominal (non-adversarial) conditions, as also mentioned in the main text. To further avoid confusion, we will update the caption to direct readers to Table 1 for robustness metrics under attack. --- ### **Table 1 Interpretation** Thank you for the valuable suggestion. We agree that average performance under attacks may obscure important aspects of robustness. In the revision, we will use the worst-case scores instead of average scores. Please see the revised table at the following anonymous link: https://gofile.io/d/G0l2mW Along with the revised table, we will clarify that VALT-EPS and VALT-SOFT consistently maintain high robustness across all tasks. For complex tasks such as HalfCheetah and Ant, incorporating MDP-level considerations—as in our methods—is essential for high robustness, especially off-policy. SAC-based methods generally outperform PPO-based methods on Hopper. While PPO variants achieve the highest score in Ant, they require over three times more environment interactions, highlighting our methods' potential for robust RL with significantly improved sample efficiency. Thank you again for your insightful feedback. --- ## Essential References Not Discussed We greatly appreciate the additional references. In the revised manuscript, we will address topics that were not sufficiently covered, such as **attacks on actions and rewards**. We also recognize that **offline robust RL** has become increasingly important. We plan to incorporate the literature you suggested, along with recent developments in this area. Due to space constraints, this discussion may be included in **Appendix A**. --- ## Other Comments or Suggestions ### **Typos** Thank you for pointing these out. 
We will correct them in the revised manuscript. --- ## Questions for Authors > **Does your off-policy approach provide benefits in online RL?** Thank you for this insightful question. Yes, we believe our off-policy approach can offer significant benefits in **online RL settings**. In real-world applications such as **robotic learning**, improved sample efficiency leads to **reduced interaction time with the physical environment**, which lowers the need for human monitoring and enables **faster, safer deployment**. That said, if the reviewer is referring to **non-stationary or rapidly changing environments**, we agree that **on-policy methods** or **adaptive formulations** may be more appropriate. Exploring hybrid or continual learning strategies is a promising direction for future work. --- Thank you again for your constructive and thoughtful feedback.
Summary: This paper proposes an off-policy VALT framework for SA-MDPs based on a Symmetric Property and Soft Optimization. Compared with the existing ATLA framework, this framework improves sample efficiency as it does not require additional training for the adversary. Claims And Evidence: + In Line 50, it is mentioned that "there is currently no off-policy actor-critic method". What, then, are the advantages of adopting the off-policy approach in SA-MDPs? In the more sensitive state-adversarial setting, off-policy learning seems to cause severe issues due to distribution shift (the authors also seem to mention this issue in Line 285). Is such a sacrifice worthwhile for the sake of the advantages of being off-policy? + What does "enhance generalization" in Line 67 refer to? Methods And Evaluation Criteria: + How is the update criterion for policy improvement in Proposition 4.11 obtained? What are the differences between the SAC-PPO method mentioned in the appendix and the series of PPO methods among the baselines? Why is this additional algorithm introduced? Theoretical Claims: + How does the general $f$-divergence term relax Equation (3) with constraint (1)? What is the gap between the final Nash equilibrium obtained after relaxation and the Nash equilibrium condition (2)? + The derivations or proofs of Proposition 4.4 and Proposition 4.5 seem to be missing. Is the $Q$ here the same as the $Q$ in Section 3.1? How are $\mathcal{H}$ and $D_f$ introduced? Experimental Designs Or Analyses: + Why does the standard deviation of the VALT-EPS algorithm suddenly increase on Hopper in Figure 2(b)? Do the authors have any understanding of this? Supplementary Material: + In Appendix C, Algorithm 1, the update of $\theta_{1,2}$ in line 8 seems redundant with the update in line 12.
+ The results in Figure 4 in Appendix E.2 seem somewhat inconsistent with those in Table 4 (the optimal setting in Figure 4(a) does not appear to be VALT-EPS-SAC + reg.). In addition, in Table 4, the methods without PE seem to be consistently better than the full set for both. What are the authors' insights into this? + The training times of the other baselines are omitted in Appendix F. What are their training times? Relation To Broader Scientific Literature: + The authors mention in Line 59 that "the proposed robust methods were not robust enough against stronger attacks". Are there any practical examples of such SA-MDPs with "stronger attacks"? Providing such examples would help readers better understand the motivation of this paper. Essential References Not Discussed: There are no necessary references known to me that this paper fails to discuss. Other Strengths And Weaknesses: ## Strengths + The Symmetric Property proposed in this paper effectively reduces the amount of environment interaction required by the existing SA-MDP framework. + The VALT framework proposed in this paper fills the gap of missing off-policy algorithms for solving SA-MDPs. ## Weaknesses + Due to the lack of derivations or proofs for Propositions 4.4 and 4.5, I am not entirely certain about the correctness of the Symmetric Property proposed in this paper. + I am not quite sure whether there are practical SA-MDP environments that are highly adversarial, making robust methods insufficiently robust. Other Comments Or Suggestions: For the main comments and suggestions, please refer to the previous sections; there are no further comments or suggestions. Questions For Authors: Refer to the previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your review and interest in our work. Below, we address your main concerns. We also acknowledge that some notational details may have been omitted due to space limitations. --- ## Claims and Evidence ### **Motivation and Advantage of this work** We believe investigating off-policy adversarial learning in SA-MDPs is valuable due to its sample efficiency and performance across diverse tasks. This aligns with the broader trend in RL since the 2010s, where off-policy methods like DQN and DDPG are widely adopted despite distribution shift. While off-policy methods do suffer from distribution mismatch, their efficiency has justified continued use—alongside many efforts to mitigate such issues. We apply the same reasoning in the adversarial setting. VALT inherits distribution shift challenges like any off-policy method, but provides clear theoretical guarantees and achieves strong empirical results—surpassing Robust-SAC in robustness on HalfCheetah and Ant (especially under learned attacks such as SA-RL and PA-AD) while maintaining sample efficiency. We emphasize that our method grounds robustness within the MDP framework rather than relying on local smoothness. Our formulation supports both theoretical clarity and practical effectiveness, aligning with ICML’s emphasis on principled contributions. ### **What does "enhance generalization" refer to** “enhance generalization” refers to the agent's improved robustness under previously unseen or perturbed observations. --- ## Methods and Evaluation Criteria ### **Update criterion (Proposition 4.11):** We consider a fixed adversary and maximize the value function in Eq. (7). Since $D_f$ does not depend on the policy, it is omitted in the optimization: $\mathbb{E}\_{\nu} [ \mathbb{E}_{\pi}[Q] + \mathcal{H}(\pi) ]$, whose analytical solution (as in SAC) is $\pi^{\star}\_{\text{old}}$ in Eq. (13). 
To approximate $\arg\max\_{\pi} V^{\pi}\_{\nu^{\text{soft}}}(s_t)$, we minimize the KL divergence between $\pi$ and $\pi^{\star}_{\text{old}}$, using the joint distribution $\pi \circ \nu$. This serves as a soft policy improvement criterion. ### **SAC-PPO vs PPO variants:** SAC-PPO uses SAC as the agent, while the PPO variants use PPO. We introduce SAC-PPO to provide a fair min-max adversarial baseline for SAC agents, as such comparisons have not been explored in prior work. --- ## Theoretical Claims ### **Relaxation via $f$-divergence:** The $f$-divergence allows optimization over a broader perturbation set $\mathcal{N}$, while softly constraining the adversary to stay near a prior distribution $p$. Under Assumption 4.2, the choice of $p$ implicitly bounds the effective perturbations. ### **Gap to Nash Equilibrium (Eq. 2):** As $\alpha_{\text{attk}} \to 0$, the relaxed soft max-min game converges to the exact Nash equilibrium. Similar to entropy regularization in PPO, a small positive $\alpha_{\text{attk}}$ often improves learning stability in practice. ### **Missing proofs (Propositions 4.4 & 4.5):** Due to space constraints, we omitted full derivations. Similar to SAC, we define a modified reward: $\hat{r} = r + \mathbb{E}_{\mathcal{F}}[\mathcal{H} + D_f]$, which leads to the soft value update rule under an assumed soft-worst adversary at the next state, yielding Eqs. (7) and (8). Here, the $Q$-function differs from that in Section 3.1 as it incorporates the adversary’s influence. --- ## Experimental Analysis ### **Variance in Hopper (Fig. 2b):** This is due to one unstable seed among the eight used. Similar fluctuations are reported in the SAC paper (Fig. 1 in [1]) for Hopper. As noted in line 1316, we apply adjustments, but Hopper's sensitivity makes some variance unavoidable. [1] Haarnoja et al., "Soft Actor-Critic Algorithms and Applications", arXiv 2018. --- ## Supplementary Material 1. Line 8 updates the critic; Line 12 updates the target network. 2.
Learning curves show mean scores over 8 seeds; tables report median seed performance. Lower-scoring outliers affect averages, hence the difference. In the Ant ablation, the lack of regularization in VALT-SOFT likely leads to a weak agent policy, which in turn results in an inconsistent adversary. In such cases, excluding adversary effects from value estimation (w/oPE) may prevent overly pessimistic updates and yield better training performance. However, we observe that under regularized conditions (+reg.), w/oPE fails to provide correct policy evaluation and leads to lower robustness. We plan to include this analysis in the revised manuscript. 3. PPO’s training times match those in [2]; we will add our timing results in the revised version. [2] Liang et al., “Efficient Adversarial Training without Attacking”, NeurIPS 2022. --- ## Broader Scientific Context and Weaknesses - Concerns on theoretical gaps are addressed in the Theoretical Claims section. - The motivation and theoretical justification for off-policy SA-MDPs are detailed above (Claims and Evidence).
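As a side note on the soft policy improvement step discussed in the rebuttal above, a toy discrete-action illustration of the SAC-style closed-form target policy may help; this is not the paper's exact continuous-action procedure, and all Q-values and the temperature below are made up:

```python
import numpy as np

# SAC-style soft policy improvement (toy, discrete actions): the improved
# policy has the closed form pi*(a|s) proportional to exp(Q(s,a)/alpha);
# a parametric policy pi is then fit by minimizing KL(pi || pi*).
Q = np.array([1.0, 2.0, 0.5])   # hypothetical Q-values for three actions
alpha = 0.5                      # entropy temperature

logits = Q / alpha
pi_star = np.exp(logits - logits.max())   # subtract max for numerical stability
pi_star /= pi_star.sum()

# KL(pi || pi*) for some candidate policy pi (what the projection minimizes)
pi = np.array([0.2, 0.6, 0.2])
kl = np.sum(pi * (np.log(pi) - np.log(pi_star)))
```

The Boltzmann form makes the entropy/Q trade-off explicit: small `alpha` concentrates `pi_star` on the greedy action, large `alpha` flattens it toward uniform.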
Summary: The paper "Robust Off-Policy Actor-Critic: Virtual Alternative Training via Symmetric Policy Evaluation" addresses the challenge of training reinforcement learning (RL) agents that are robust to adversarial perturbations in their input observations. Existing methods often rely on alternating training between the agent and an explicitly learned adversary, which can be sample-inefficient and difficult to integrate with off-policy algorithms. This paper proposes a novel off-policy framework called Virtual ALternative Training (VALT). The key idea of VALT is to reformulate adversarial learning as a soft-constrained optimization problem that eliminates the need for additional environmental interactions to train the adversary. Instead, the adversary's policy and value function are implicitly derived by leveraging the agent's value estimation and a symmetric property of policy evaluation between the agent and the adversary. The paper presents a way to construct an alternative adversarial training framework without explicitly learning an RL policy for the adversary. This is achieved by exploiting the symmetry in policy evaluation. The authors present two concrete algorithms based on the Soft Actor-Critic (SAC) that implement the VALT framework. These algorithms demonstrate both sample efficiency and robustness against various adversarial attacks. VALT-EPS-SAC uses an epsilon-worst-case approach to approximate the adversary, and VALT-SOFT-SAC employs a parameterized policy network to model the (soft) optimal adversary. Experiments on challenging MuJoCo continuous control tasks (HalfCheetah, Hopper, Walker2d, and Ant) show that the proposed VALT-based algorithms achieve significantly better sample efficiency compared to on-policy adversarial training methods (like ATLA-PPO) and demonstrate superior robustness against a wide range of heuristic and learning-based adversaries, often outperforming existing robust off-policy baselines (like Robust-SAC).
Claims And Evidence: The first claim is that it is theoretically possible to construct an adversarial framework without requiring an explicit RL process for the adversary. This is achieved by a theoretical proof showing that the adversary's optimal value function has a simple relationship to the agent's optimal value function. The second claim is that this new framework is viable and robust experimentally. This is supported by empirical results in Figure 2 and Table 1. Experiments are run on some OpenAI Gym control problems. Methods And Evaluation Criteria: The method is tested on OpenAI Gym, which has relatively simple state representations. The observations are visually very simple, which somewhat limits the potential impact. Mentioning computational cost is good, but a more thorough analysis of wall-clock time or FLOPs compared to baselines would be valuable, especially considering VALT claims to improve sample efficiency without increased computation. Theoretical Claims: I only had a look at the proof of Theorem 4.6 because it is the most important piece. It seems correct. Experimental Designs Or Analyses: I focused on the main results from Sec. 5. The main results seem sound and valid. The authors compare against a wide range of both on-policy and off-policy methods, including state-of-the-art robust RL algorithms, which allows for a strong assessment of VALT's relative performance. They evaluated robustness against various attackers: * Heuristic Attacks: Random (Uniform), Max-ActionDiff (MAD), PGD (minQ for SAC and minV for PPO), and RobustSarsa (RS). This provides a range of attack strategies, from simple random noise to gradient-based and more sophisticated attacks. * Learning-Based Adversary Attacks: SA-RL (PPO and SAC) and PA-AD (PPO and SAC). These are more challenging attackers, as they are learned RL agents themselves. This provides a good picture of overall performance. Supplementary Material: Just B.1 for the proof of Theorem 4.6.
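For readers unfamiliar with the heuristic attacks listed above, a PGD-style "minQ" observation attack can be sketched roughly as follows; the quadratic critic and all parameter values are illustrative stand-ins (with an analytic gradient in place of backpropagation through a Q-network), not the paper's implementation:

```python
import numpy as np

# PGD-style "minQ" observation attack: perturb the state within an
# l-infinity ball to minimize the critic's value of the observed state.
def q_value(s):
    return -np.sum((s - 1.0) ** 2)       # toy critic, peaked at s = 1

def q_grad(s):
    return -2.0 * (s - 1.0)              # its analytic gradient

def pgd_min_q(s0, eps=0.1, step=0.02, iters=20):
    s = s0.copy()
    for _ in range(iters):
        s = s - step * np.sign(q_grad(s))     # signed-gradient descent on Q
        s = np.clip(s, s0 - eps, s0 + eps)    # project back into the l-inf ball
    return s

s0 = np.zeros(3)
s_adv = pgd_min_q(s0)
```

With a real agent, `q_value` would be the learned critic evaluated at the perturbed observation, and the gradient would come from autograd.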
Relation To Broader Scientific Literature: I think it is novel enough as a contribution to the robust RL literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: * The paper acknowledges that VALT currently lacks support for off-policy algorithms in discrete action environments. The experimental evaluation is therefore limited to continuous action domains. Showing results in discrete action domains, or at least discussing potential adaptations for discrete action spaces more thoroughly, would be beneficial for the broader applicability of VALT. * Mentioning computational cost is good, but a more thorough analysis of wall-clock time or FLOPs compared to baselines would be valuable, especially considering VALT claims to improve sample efficiency without increased computation. * In some of the experiments without any perturbation, e.g., HalfCheetah, the proposed method gets much worse results than SAC. This is not desirable in general. Other Comments Or Suggestions: N/A Questions For Authors: * Have you tried other, more visually challenging environments? Do the results still hold? * Why do you think the results without noise are so much worse on HalfCheetah for VALT than for the SAC baseline? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We appreciate your detailed review and insightful suggestions.** We understand that your main concerns center around: (1) adaptation to discrete action domains, (2) analysis of computational cost, and (3) performance degradation of VALT on HalfCheetah in the absence of noise. Below, we address each point in detail. --- ### (1) Adaptation to Discrete Action Domains and Visually Challenging Environments > _"The paper acknowledges that VALT currently lacks support for off-policy algorithms in discrete action environments... Have you tried other more visually challenging environments?"_ Thank you for highlighting this important direction. We agree that both discrete action settings and visually challenging observations are valuable for broadening the applicability of robust RL methods. Although we have not yet explored this direction, we plan to further develop our current approach to pursue significant improvements in this area. As briefly noted in line 1441, we observe a conceptual connection between VALT-EPS and WocaR. In the case of discrete action spaces, we believe that a discrete variant of VALT-EPS would resemble WocaR-DQN. Specifically, our variant would rely on a **single soft-worst Q-function** (compared to the two used in WocaR-DQN: vanilla and worst), updated using **both uniformly random attacked Q-values and PGD/convex-relaxed worst-case Q-values**. While this variation offers interesting theoretical properties—such as **contraction** and **policy improvement**—we chose not to further discuss it due to the lack of supporting experiments at this stage. However, if the reviewer finds it appropriate, we would be happy to include this theoretical discussion and point to it. 
That said, even without experiments on discrete action tasks with visual inputs, we believe the contributions of this work are significant enough to justify publication: - A demonstration that adversarial robustness can be achieved without explicitly training an adversary. - Theoretical guarantees (e.g., contraction, policy improvement under fixed adversary assumptions). - The first extensive benchmark for off-policy robust RL with SAC variants, evaluated under a PPO-compatible framework. - We are committed to releasing our code and evaluation environments to support reproducibility and facilitate future research on observation-robust off-policy methods. --- ### (2) Computational Cost Analysis > _"Mentioning computational cost is good, but a more thorough analysis of wall-clock time or FLOPs compared to baselines would be valuable."_ We agree and appreciate this suggestion. While we report wall-clock time in Appendix F, we acknowledge that a more detailed breakdown (e.g., time per processes or FLOPs per step) would be valuable. We plan to include this analysis in a future revision and will provide code and scripts (currently being refactored and documented) that enable monitoring of computation times to support full reproducibility. --- ### (3) Performance on HalfCheetah without Noise > _"Why do you think the results without noise are so much worse on HalfCheetah for VALT than the SAC baseline?"_ This is a great point. As noted in prior robust RL studies (e.g., [1]), there exists a fundamental **trade-off between clean-environment performance and robustness** under adversarial perturbations. This trade-off applies not only to VALT but also to methods like Robust-SAC (see line 1359), where reducing regularization restores SAC-like performance (\~10,000) but sacrifices robustness. For VALT, similar trade-offs are governed by: - The adversarial ratio in the behavior policy (e.g., assuming no attack vs. 
assuming adversarial conditions), and - The strength of the soft constraint coefficient (e.g., considering near-random vs. near-worst-case attacks). The effect of the adversarial ratio in the behavior policy is illustrated in Table 5 and Figure 5. Although WocaR-PPO [1] also reports such a trade-off, it appears less drastic due to PPO's lower clean performance (\~5,000-6,000), whereas SAC achieves higher scores (\~8,000-10,000), thus making the drop more visually noticeable. [1] Liang, Y., Sun, Y., Zheng, R., and Huang, F. Efficient adversarial training without attacking: Worst-case-aware robust reinforcement learning. *Advances in Neural Information Processing Systems*, 35:22547–22561, 2022. --- Once again, thank you for your constructive feedback. We believe your suggestions will help us further improve the clarity, applicability, and impact of our work.
AlphaQCM: Alpha Discovery in Finance with Distributional Reinforcement Learning
Accept (poster)
Summary: The paper introduces AlphaQCM, a novel reinforcement learning method for discovering formulaic alphas in finance. It conceptualizes alpha discovery as a non-stationary and reward-sparse Markov decision process and addresses these challenges through a Q-learning framework combined with quantile-based variance estimation. With its Q function and quantile networks, AlphaQCM navigates large search spaces efficiently, outperforming existing methods like AlphaGen in empirical tests on financial datasets. The results demonstrate AlphaQCM's superiority, particularly in complex market environments, providing a more effective and interpretable approach to quantitative finance. Claims And Evidence: The claims in the submission are generally supported by empirical evidence, particularly in demonstrating AlphaQCM's superiority over baseline methods through Information Coefficient (IC) comparisons. The claim that AlphaQCM outperforms AlphaGen is supported by results but lacks an in-depth analysis of AlphaGen's failure cases. Additionally, the assertion that AlphaQCM generalizes well to complex datasets is supported by tests on Chinese stock markets but would benefit from validation across different financial environments. Strengthening these areas with sensitivity analysis, robustness checks, and broader market applications would make the claims more convincing. Methods And Evaluation Criteria: The methods and evaluation criteria make sense for the problem of discovering formulaic alphas in finance, given the use of reinforcement learning (RL). The benchmark datasets are relevant but limited to Chinese markets, raising concerns about generalizability; testing on other global financial datasets would strengthen the findings. While the paper addresses reward sparsity using distributional RL, alternative methods such as intrinsic motivation or curriculum learning could have been explored.
The choice of an LSTM-based architecture is reasonable for sequence modeling, but newer architectures like MAMBA could offer efficiency and scalability advantages. The paper overlooks the potential of large language models (LLMs) for formula discovery. A dedicated formula generator leveraging LLMs could provide a strong baseline or hybrid approach. Finally, the dataset only extends to 2022, missing recent financial data from 2023 and 2024, which is crucial for validating the continued effectiveness of AlphaQCM in evolving market conditions. Theoretical Claims: No major inconsistencies are apparent in the theoretical claims or the formulas presented. Experimental Designs Or Analyses: The paper presents a well-structured experimental design, comparing AlphaQCM against multiple baseline methods using the Information Coefficient (IC) as the primary performance metric. The datasets used provide a reasonable range of financial environments to assess robustness. The reinforcement learning framework, including the Q-network and quantile-based variance estimation, is theoretically sound, and the experiments incorporate ablation studies to isolate the impact of key components like the QCM method. Supplementary Material: The supplementary material includes code to reproduce the experiments and validate the reported results. Relation To Broader Scientific Literature: The contributions of the paper align with the broader scientific literature on quantitative finance, reinforcement learning, and explainable AI. By addressing the challenge of discovering formulaic alphas through reinforcement learning, the paper builds upon prior work in both genetic programming-based and RL-based alpha discovery methods while introducing improvements for handling non-stationarity and sparse rewards. Its focus on creating explainable formulas is valuable, as many machine learning approaches in finance rely on black-box models that lack interpretability.
However, it could benefit from deeper comparisons with alternative explainability-focused methods, such as symbolic regression or LLM-based formula generation, to better position its contributions within the broader literature. Essential References Not Discussed: The paper misses references on LLMs for formula discovery, for example [1]. [1] Shojaee, Parshin, et al. "Llm-sr: Scientific equation discovery via programming with large language models." arXiv preprint arXiv:2404.18400 (2024). Other Strengths And Weaknesses: No additional strengths and weaknesses. Other Comments Or Suggestions: No additional comments. Questions For Authors: No additional questions for authors. Code Of Conduct: Affirmed. Overall Recommendation: 3
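For reference, the Information Coefficient (IC) used as the primary metric in this review is commonly computed as the cross-sectional Pearson correlation between an alpha's values and next-period returns, averaged over trading days; the sketch below uses synthetic data, and the function name and numbers are illustrative:

```python
import numpy as np

def cross_sectional_ic(alpha, fwd_ret):
    """Pearson correlation between alpha values and forward returns (one day)."""
    a = alpha - alpha.mean()
    r = fwd_ret - fwd_ret.mean()
    return float((a * r).sum() / np.sqrt((a * a).sum() * (r * r).sum()))

rng = np.random.default_rng(0)
signal = rng.normal(size=500)                     # synthetic alpha values
returns = 0.5 * signal + rng.normal(size=500)     # signal is genuinely predictive
ic = cross_sectional_ic(signal, returns)          # roughly 0.45 in expectation
```

A full backtest would compute this per day over the test period and report the mean (as in the IC tables of this review's paper); some pipelines use Spearman rank correlation ("rank IC") instead.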
Rebuttal 1: Rebuttal: Thank you for your positive feedback and helpful comments. To improve the quality of this paper, we have carefully considered your suggestions and questions. Limited by the maximum length, our shortened replies are as follows: ## Q1 *The claim that AlphaQCM outperforms AlphaGen is supported by results but lacks an in-depth analysis of AlphaGen’s failure cases.* **A1:** Thanks for your good comment. While the alpha discovery task is conceptualized as an MDP, AlphaGen employs a vanilla PPO algorithm to solve the alpha discovery problem, ignoring the non-stationarity and reward sparsity of this MDP. This weakness leaves room for improvement over the AlphaGen method. By contrast, our AlphaQCM method aims to address the non-stationarity and reward-sparsity issues by employing the distributional RL algorithm and the QCM method. We hope our answer above clarifies AlphaGen’s failure cases. In the revised version, we will highlight this in-depth comparison. --- ## Q2 *Additionally, the assertion that AlphaQCM generalizes well to complex datasets is supported by tests on Chinese stock markets but would benefit from validation across different financial environments.* **A2:** We appreciate your valuable comment. In this rebuttal, we have provided additional experimental results using data from the U.S. stock market. Please refer to A2 in the response to Reviewer JbCG. --- ## Q3 *Newer architectures like MAMBA could offer efficiency and scalability advantages.* **A3:** In this rebuttal, we have conducted an ablation study to assess whether the MAMBA block could provide efficiency and scalability advantages for the AlphaQCM method. Specifically, we have replaced the original LSTM module with a MAMBA block, using a 128-dimensional hidden layer and 4 attention heads, while keeping the rest of the network architecture unchanged. Due to time constraints, we were only able to run 3 random seeds for the MAMBA-based AlphaQCM method.
The results from the ablation study are shown in the table below. As seen, there is no significant difference between the IC values from the LSTM-based and MAMBA-based AlphaQCM methods. This may be due to the relatively short token sequence (i.e., the formula of an alpha), which consists of only the top 20 tokens. Additionally, there is no notable efficiency difference in terms of time, as the majority of the time is spent on agent-environment interaction.

|Model|CSI300 Mean|CSI300 Std|CSI500 Mean|CSI500 Std|Market Mean|Market Std|
|-|-|-|-|-|-|-|
|LSTM|8.49|1.03|9.55|1.16|9.16|1.61|
|MAMBA|8.56|1.05|9.32|1.16|9.13|1.63|

--- ## Q4 *The paper overlooks the potential of large language models (LLMs) for formula discovery.* **A4:** Apologies for the lack of comparison and discussion regarding the literature on LLM-based formula discovery. One such method is the LLM-SR method [1], which you referenced. We have implemented the official code, but unfortunately, ***the alphas we found had less than 3% out-of-sample IC values on the CSI300 and CSI500 datasets. Note that those discovered by the AlphaGen and AlphaQCM methods have out-of-sample IC values exceeding 8%.*** However, we do not dismiss the potential of using LLMs for alpha discovery. The performance of the LLM-SR method is highly influenced by prompts. Furthermore, the stock market dataset is far more complex than the physical and biological datasets considered in [1]. **A more in-depth discussion and comparison will be added to the revised manuscript**, but due to time constraints, we leave it for future work. --- ## Q5 *Finally, the dataset only extends to 2022, missing recent financial data from 2023 and 2024.* **A5:** Thank you for your good comment. We have expanded the testing set in the manuscript from the period (2021/01/01 to 2022/12/31) to (2021/01/01 to 2024/12/31), while the training and validation periods are kept unchanged. The experimental results are shown in the table below.
From this table, we find that the advantages of our AlphaQCM method over the baseline methods remain evident. However, nearly all methods show a noticeable decrease in out-of-sample IC values. This issue could potentially be addressed by re-fitting the models in these methods whenever new information becomes available.

|Method|CSI300 Mean|CSI300 Std|CSI500 Mean|CSI500 Std|Market Mean|Market Std|
|-|-|-|-|-|-|-|
|Alpha101|3.02|-|4.11|-|3.78|-|
|MLP|1.47|0.26|2.15|0.66|2.04|0.76|
|XGBoost|1.80|0.97|3.16|1.11|3.25|1.33|
|LightGBM|1.85|0.79|2.32|0.84|2.37|1.06|
|GP w/o filter|1.15|1.87|1.04|1.65|0.89|2.02|
|GP w/ filter|2.47|2.28|3.54|2.14|0.56|2.57|
|AlphaGen|4.13|0.95|4.19|1.39|3.19|1.94|
|AlphaQCM|**5.48**|**1.17**|**5.87**|**1.33**|**4.83**|**1.79**|

---

## Due to length constraints, we would be happy to address any remaining questions during the next phase.

[1] Shojaee, P., K. Meidani, S. Gupta, A. B. Farimani, and C. K. Reddy (2025). LLM-SR: Scientific equation discovery via programming with large language models.

---

Rebuttal Comment 1.1: Comment: Thank you for answering the questions and taking the feedback into consideration. I would like to keep my original score.
Summary: The authors propose a method based on distributional reinforcement learning and QCM to learn a good set of well-formed features (i.e., alphas) for stock market prediction. The proposed method is compared with existing baselines and is shown to outperform them on 3 stock market datasets. ## Update after rebuttal I maintain my score, thank you. Claims And Evidence: The main claim of the authors concerns the contribution of an exploration bonus based on QCM to address the issues of reward sparsity and non-stationarity in the formulated MDP tasks. Empirically, the authors show clear improvement in test performance when the proposed exploration bonus is employed. Methods And Evaluation Criteria: The exploration-exploitation problem is a critical issue in RL, and the use of an exploration bonus is a well-known method to address it. The proposed method exploits distributional RL and QCM to obtain such an exploration bonus. While not entirely novel, I believe it does make a valuable contribution to the RL literature. Theoretical Claims: There is not much theoretical content in this work except the consistency of the moment estimator under suitable assumptions. It is unclear to me whether the assumptions make sense in the task setup used in this work, and I would appreciate it if the authors could clarify this. Experimental Designs Or Analyses: The experimental design is sensible and the chosen set of baselines seems adequate. The only weakness here is that the evaluation is restricted to essentially one dataset (and its subsets). Supplementary Material: I did not. Relation To Broader Scientific Literature: See "Methods And Evaluation Criteria" above. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the positive review and very helpful feedback. Please let us know whether the points below sufficiently clarify your concerns.

---

## Q1 from "Theoretical Claims": *There is not much theoretical result in this work except the consistency of the moment estimator under suitable assumptions. It is unclear to me whether the assumptions make sense in the task setup used in this work and I appreciate it if the authors could clarify this.*

**A1:** Certainly. We are pleased to clarify the meaning and importance of these two assumptions. Recall the two assumptions as follows:

**Assumption D.1:** $\Phi' \Phi \text{ is positive definite;}$

**Assumption D.2:** $\frac{\Phi' \varepsilon(x, a)}{K} \xrightarrow{p} 0 \text{ as } K \to \infty.$

Assumption D.1 is necessary for the existence of the least squares estimator in the linear regression model (4) outlined in the manuscript. Verifying and ensuring this assumption is straightforward, as $\Phi$ is a matrix built upon conditional quantiles of the Gaussian distribution. Assumption D.2 requires that the weighted sum of the stochastic residuals in the linear regression model (4) become negligible as $K$ (the number of quantiles) grows. The validity of this assumption is essential to ensure the consistency of the least squares estimator. If you are interested in a more detailed proof, please refer to A1 in the response to Reviewer 5Vkr.

---

## Q2 from "Experimental Designs Or Analyses": *The experimental design is sensible and the chosen set of baselines seems adequate. The only weakness here is that the evaluation is only restricted to essentially one dataset (and its subsets).*

**A2**: Thank you for your insightful point. In this rebuttal, we have presented additional experimental results using U.S. stock market data. Due to the time constraints of the rebuttal, we are able to include only one stock pool: the largest 500 stocks (S&P 500).
Apart from the choice of stocks, all other experimental settings are consistent with those presented in Table 1 of the manuscript. The experimental results are shown in the table below.

---

| Method | CSI500 Mean (%) | CSI500 Std (%) | S&P 500 Mean (%) | S&P 500 Std (%) |
|----------------|-----------------|----------------|------------------|-----------------|
| Alpha101 | 4.38 | - | 3.12 | - |
| MLP | 2.72 | 0.65 | 2.61 | 0.49 |
| XGBoost | 4.31 | 0.96 | 3.08 | 0.67 |
| LightGBM | 4.16 | 0.81 | 3.29 | 0.56 |
| GP w/o filter | 1.79 | 1.62 | 1.88 | 1.29 |
| GP w/ filter | 4.52 | 1.93 | 4.27 | 1.39 |
| AlphaGen | 8.08 | 1.23 | 7.48 | 0.77 |
| **AlphaQCM** | **9.55** | **1.16** | **8.46** | **0.89** |

---

From this table, we find that the AlphaQCM method maintains significant advantages over all competitors on both the CSI500 and S&P 500 datasets, demonstrating robust performance across different market conditions. However, compared with the CSI500 results, most methods exhibit a decline in IC performance on the S&P 500 dataset. This phenomenon may be attributed to the U.S. stock market being more efficient and more strongly influenced by breaking news, making it harder to capture predictable stock trends with formulaic alphas.

---
Summary: This paper introduces AlphaQCM, a novel distributional reinforcement learning (DRL) method for discovering synergistic formulaic alphas in finance. The authors conceptualize the alpha discovery process as a non-stationary and reward-sparse Markov Decision Process (MDP) and propose AlphaQCM to address these challenges. AlphaQCM leverages the IQN algorithm to learn quantiles of cumulative discounted rewards and employs the Quantiled Conditional Moment (QCM) method to estimate unbiased variance, even under non-stationarity. This variance is then used as an exploration bonus to guide the agent in navigating the vast search space of formulaic alphas. Empirically, AlphaQCM is shown to outperform competitors, including AlphaGen and GP-based methods, on real-world Chinese stock market datasets, particularly when dealing with larger datasets and more complex financial systems. The main algorithmic idea is the integration of QCM within a distributional RL framework to handle non-stationarity and reward sparsity, leading to more efficient and effective alpha discovery. Claims And Evidence: Yes, the claims made in the submission are generally well supported by clear and convincing evidence. The central claim that AlphaQCM effectively addresses the challenges of non-stationarity and reward sparsity in alpha discovery is substantiated by the empirical results. Specifically, the paper demonstrates through extensive experiments that AlphaQCM consistently achieves superior Information Coefficient (IC) values compared to various baselines across different Chinese stock market datasets (CSI300, CSI500, and Market). The ablation studies further strengthen the claim by isolating the contribution of the QCM method and showing its advantage over using no variance or a vanilla variance estimator. 
The paper emphasizes the robustness of AlphaQCM and its improved performance in more complex market settings, which is also supported by the increasing performance gap relative to baselines as the stock pool size increases. The evidence presented, particularly in Tables 1, 2, and 3, appears to convincingly support the claim of improved alpha discovery efficacy using AlphaQCM, especially in non-stationary environments. Methods And Evaluation Criteria: The proposed method, AlphaQCM, and the evaluation criteria are generally appropriate for the problem of formulaic alpha discovery in finance. The conceptualization of alpha discovery as an MDP is a reasonable and increasingly adopted approach in this domain. The use of distributional reinforcement learning, and specifically the integration of QCM, is a novel and potentially impactful methodological contribution to address the identified challenges of non-stationarity and reward sparsity. However, a notable limitation of the evaluation is its focus solely on the Chinese A-share stock market. While the Chinese market is a significant and complex market, the generalizability of the findings to other global markets (e.g., US, European, or emerging markets with different market microstructures and regulatory environments) remains an open question. The evaluation criteria, primarily the Information Coefficient (IC), is a standard and accepted metric in quantitative finance for evaluating alpha performance, making it a relevant choice. To strengthen the evaluation, future work could explore the performance of AlphaQCM on datasets from diverse global markets to assess the broader applicability and robustness of the method. Theoretical Claims: Not Applicable. The paper is primarily focused on algorithmic development and empirical validation rather than presenting novel theoretical claims with formal proofs. 
While Proposition 3.1 regarding the consistency of moment estimators is mentioned, the paper does not delve deeply into formal proofs or extensive theoretical analysis. The strength of the paper lies in its methodological innovation and empirical demonstration of performance. Therefore, the absence of extensive theoretical claims is not a weakness in this context. Experimental Designs Or Analyses: The experimental designs and analyses appear sound and valid. The authors have included a number of strong and relevant baselines for comparison, including: * Alpha101 (Human-designed alphas): Representing human expert knowledge, providing a benchmark for machine-driven discovery. * MLP, XGBoost, LightGBM (ML-based non-formulaic alphas): Representing end-to-end machine learning approaches, allowing for comparison against complex, but less interpretable, methods. * GP w/o filter, GP w/ filter (GP-based formulaic alphas): Representing the current mainstream approach for formulaic alpha discovery, allowing for direct comparison to existing formulaic methods. * AlphaGen (RL-based formulaic alphas): Representing the most closely related RL-based baseline, highlighting the incremental improvement of AlphaQCM. Furthermore, the ablation studies are well-designed to isolate the impact of key components of AlphaQCM: * Variance Methods: Comparing QCM variance to no variance and vanilla variance, demonstrating the effectiveness of QCM. * DRL Backbones: Comparing IQN and QRDQN backbones, showing the robustness across different distributional RL algorithms. * Domain Knowledge: Assessing the impact of initializing the replay buffer with expert-designed alphas, providing insights into the role of prior knowledge. * Parameter Size: Analyzing the performance with varying parameter sizes for both AlphaQCM and AlphaGen, demonstrating robustness and consistent outperformance. The use of 10 random seeds for each experimental setting to account for stochasticity is also a good practice. 
The results are presented clearly, with means and standard deviations of IC values, enabling a robust comparison of the methods. Supplementary Material: Yes, I reviewed the supplementary material and code files. The supplementary material appears comprehensive and well-organized, providing necessary details to understand and potentially replicate the work. Relation To Broader Scientific Literature: The key contribution of this paper is primarily related to the scientific literature within the domain of algorithmic trading and quantitative finance, specifically in the sub-area of formulaic alpha discovery. The paper builds upon and extends the existing literature on using machine learning, and particularly reinforcement learning, for financial signal generation. It directly relates to prior work that utilizes genetic programming and reinforcement learning for discovering formulaic alphas, such as the AlphaGen method. While the paper makes a valuable contribution to this specific niche, its broader contribution to the general machine learning literature might be considered somewhat limited. The core methodological novelty lies in the application of QCM for variance estimation within a distributional RL framework in the context of alpha discovery. While this is a technically sound and empirically effective approach for the specific problem, it might not represent a fundamentally new machine learning concept with broad applicability across diverse ML domains. However, the effective demonstration of addressing non-stationarity and reward sparsity in a real-world financial application is a valuable contribution to the intersection of ML and finance. Essential References Not Discussed: No, based on my current understanding of the literature related to formulaic alpha discovery and reinforcement learning in finance, there do not appear to be any essential references that are critically missing from the paper. 
The paper cites relevant works on distributional RL, quantile regression, genetic programming for alpha discovery, and related RL-based alpha generation methods. The references provided seem to adequately contextualize the key contributions within the relevant scientific domain. Other Strengths And Weaknesses: Strengths: * Originality: The application of the QCM method within a distributional RL framework to address non-stationarity and reward sparsity in alpha discovery is a novel and original contribution. * Significance: The paper addresses a practically significant problem in quantitative finance – the efficient and robust discovery of formulaic alphas, particularly in complex and dynamic markets. The demonstrated performance improvement over strong baselines highlights the practical significance of AlphaQCM. * Clarity: The paper is generally well-written and clearly explains the proposed method, experimental setup, and results. The figures and tables are informative and contribute to the clarity of presentation. The supplementary material further enhances clarity and reproducibility. * Empirical Validation: The extensive empirical validation on real-world datasets, including ablation studies and comparisons to multiple strong baselines, is a significant strength. Weaknesses: * Data Source Detail: While the paper mentions Chinese A-share stock market datasets, it lacks specific details about the exact data sources used (e.g., specific data vendors, data cleaning procedures). This could slightly hinder reproducibility. * "Market" Definition: The definition of the "Market" dataset is not explicitly clarified. It is mentioned as "all stocks," but it's unclear if this refers to CSI-1000 or a different universe of stocks. Clarifying this definition would improve clarity. * Geographic Limitation: As mentioned earlier, the evaluation is limited to the Chinese stock market. 
The generalizability to other markets needs further investigation to fully assess the robustness of AlphaQCM. * Interpretability of Discovered Alphas: While the paper emphasizes formulaic alphas as being more interpretable than black-box ML models, it doesn't deeply explore the actual interpretability of the alphas discovered by AlphaQCM or provide examples of the discovered formulas and their economic intuition. Other Comments Or Suggestions: No further comments or suggestions at this time. Questions For Authors: See the weakness part above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback and positive attitude towards our work. To enhance the quality of this paper, we have carefully considered your suggestions and questions. Below, we provide a point-by-point response to your comments: --- ## Q1 from "Methods And Evaluation Criteria": *However, a notable limitation of the evaluation is its focus solely on the Chinese A-share stock market. ... To strengthen the evaluation, future work could explore the performance of AlphaQCM on datasets from diverse global markets to assess the broader applicability and robustness of the method.* **A1**: To demonstrate the generalizability of our AlphaQCM method, we have conducted additional experiments using the U.S. stock market dataset (S&P 500). The results show that AlphaQCM continues to substantially outperform all competitor methods on both the CSI500 and S&P 500 datasets. For detailed results, please see A2 in the response to Reviewer JbCG. --- ## Q2 from "Theoretical Claims": *While Proposition 3.1 regarding the consistency of moment estimators is mentioned, the paper does not delve deeply into formal proofs or extensive theoretical analysis.* **A2**: Thank you for your understanding regarding our focus on the application. To complete the theoretical foundation, we have provided the technical proof of Proposition 3.1 in this rebuttal. If you are interested, please refer to A1 in the response to Reviewer 5Vkr for the detailed proof. Moreover, we will include the proof in the Appendix in the revised version. --- ## Q3 from "Other Strengths And Weaknesses": *While the paper mentions Chinese A-share stock market datasets, it lacks specific details about the exact data sources used (e.g., specific data vendors, data cleaning procedures). This could slightly hinder reproducibility.* **A3**: Thank you for your insightful suggestion. In this paper, we use the Chinese stock data from the Baostock database and the U.S. stock data from WRDS database. 
The data cleaning procedures follow those outlined in [1], and the corresponding codes are provided for reproducibility. --- ## Q4 from "Other Strengths And Weaknesses": *The definition of the "Market" dataset is not explicitly clarified. It is mentioned as "all stocks," but it’s unclear if this refers to CSI-1000 or a different universe of stocks. Clarifying this definition would improve clarity.* **A4**: Apologies for the confusion. Regarding the "Market" stock pool, we refer to all stocks listed on the Shanghai and Shenzhen Stock Exchanges. We will clarify this definition in the revised version to ensure better clarity. --- ## Q5 from "Other Strengths And Weaknesses": *As mentioned earlier, the evaluation is limited to the Chinese stock market. The generalizability to other markets needs further investigation to fully assess the robustness of AlphaQCM.* **A5**: Please refer to A1 above on this matter. --- ## Q6 from "Other Strengths And Weaknesses": *While the paper emphasizes formulaic alphas as being more interpretable than black-box ML models, it doesn’t deeply explore the actual interpretability of the alphas discovered by AlphaQCM or provide examples of the discovered formulas and their economic intuition.* **A6**: Thank you for your insightful question. To be honest, the discovered alphas can vary across different random seeds, and interpreting these alphas often requires solid financial background knowledge. As a result, we have chosen not to delve into this aspect in the manuscript. To address your question, we provide two easy-to-interpret discovered alphas based on the Market dataset: 1. $$\text{RANK(Volume} \times \frac{\text{HIGH}}{\text{VWAP}} + \text{CONSTANT}) / \text{CLOSE}$$ 2. 
$$\text{WMA(ABS}\left(\frac{\text{CLOSE}}{\text{OPEN}} \times \text{CONSTANT}\right), 30)$$ --- **Explanation**: - In the first discovered alpha, $\frac{\text{HIGH}}{\text{VWAP}} \times \text{Volume} + \text{CONSTANT}$ can be interpreted as a volume-adjusted price upside variation. The calculated values are then cross-sectionally normalized using the RANK operator and further normalized by the closing price. This alpha suggests that future stock trends are influenced by intraday price variation. - On the other hand, the second alpha measures the absolute values of intraday returns and smooths them over a 30-day window via the WMA operator. This can be viewed as a special form of short-term reversal [2], a widely recognized formulaic alpha in finance. --- ## References [1] Yu, S., H. Xue, X. Ao, F. Pan, J. He, D. Tu, and Q. He (2023). Generating synergistic formulaic alpha collections via reinforcement learning. *In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '23*, pp. 5476–5486. Association for Computing Machinery. [2] Jegadeesh, N. (1990). Evidence of predictable behavior of security returns. *The Journal of Finance*, 45, 881–898.
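For concreteness, the two discovered alphas above can be prototyped in a few lines. This is an illustrative sketch only: the exact RANK normalization and WMA weighting used by the paper's operators are our assumptions, not the authors' definitions.

```python
def rank(xs):
    """Cross-sectional RANK: position in the sorted order, scaled to (0, 1]."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0.0] * len(xs)
    for pos, i in enumerate(order):
        out[i] = (pos + 1) / len(xs)
    return out

def wma(series, window):
    """WMA: linearly weighted moving average over a trailing window."""
    out = []
    for t in range(len(series)):
        vals = series[max(0, t - window + 1):t + 1]
        weights = range(1, len(vals) + 1)  # most recent value weighted most
        out.append(sum(w * v for w, v in zip(weights, vals)) / sum(weights))
    return out

# alpha 1, computed per day across stocks:
# RANK(Volume * HIGH / VWAP + CONSTANT) / CLOSE
def alpha1(volume, high, vwap, close, const=1.0):
    scores = [v * h / p + const for v, h, p in zip(volume, high, vwap)]
    return [r / c for r, c in zip(rank(scores), close)]

# alpha 2, computed per stock over time:
# WMA(ABS(CLOSE / OPEN * CONSTANT), 30)
def alpha2(close, open_, const=1.0, window=30):
    return wma([abs(c / o * const) for c, o in zip(close, open_)], window)
```

With real data, `alpha1` would be evaluated once per trading day over the cross-section of stocks, and `alpha2` once per stock over its daily OHLC series.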
Summary: This paper proposes a distributional reinforcement learning-based alpha discovery process for algorithmic trading in the stock market. Motivated by the quantile conditional moments (QCM) method, the authors provide an unbiased estimation of variance from quantiles to improve the performance of discovering synergistic formulaic alphas. This estimated variance is used as a bonus on the Q-values in the action selection process during RL training. In experiments, the proposed method outperforms several traditional baselines and the AlphaGen algorithm on the offline RL trading benchmarks. Claims And Evidence: The authors present a theoretical statement in Proposition 3.1 to estimate the unbiased variance for the bonus in Equation (5), which is the main claim of this paper. However, they provide this proposition without any proof. If there is a proof, please let me know. The second problem is that the target tasks (CSI300/500) seem to be offline RL frameworks, while the proposed method is an online RL algorithm. It is not clear why the proposed method works in this offline RL setting. Methods And Evaluation Criteria: The key evaluation criterion of this paper is IC, the information coefficient, which is based on Pearson's correlation coefficient. However, the paper does not provide any detailed explanation of this metric, and it is hard to understand the exact meaning of the out-of-sample IC values in the tables. In addition, the previous baseline, AlphaGen, also provides Rank IC to help readers check the performance of the algorithms. I think the authors need to add these details to improve the quality of this submission. Theoretical Claims: I have tried to check the correctness of the theoretical claims, but the authors do not provide the proof of Proposition 3.1. If there is one, please let me know. Experimental Designs Or Analyses: The basic experimental design follows the AlphaGen paper, which seems valid.
However, there is not enough explanation of the specific metrics, such as the out-of-sample IC values, or of the ablation study, to help readers understand how the proposed bonus works to find better alphas. Supplementary Material: I reviewed the full appendix of the paper and the supplementary files, which contain the implementation code. Relation To Broader Scientific Literature: This work targets algorithmic trading in finance to achieve better profit. Essential References Not Discussed: Although the authors provide the key references on the QCM method and the Cornish-Fisher expansion, the detailed explanation is not sufficient. Other Strengths And Weaknesses: Using distributional RL properties to analyze the uncertainty of alpha discovery is quite interesting to me, but the quality of the paper should be improved to meet the standard of this venue. Other Comments Or Suggestions: It seems useful to include more baselines from AlphaGen, such as PPO with filters, to compare the performance of RL-based algorithms. Questions For Authors: As I mentioned above, can you provide the proof of Proposition 3.1 and more details on Equation (5), which is the key part explaining how the proposed method works in the action selection process? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you very much for your insightful comments and suggestions, which are of great help in improving our article. Limited by the maximum length, we hope you will find that our responses successfully address the important issues you have raised.

---

## Q1 *The authors present a theoretical statement in Proposition 3.1 to estimate the unbiased variance for the bonus in Equation (5), which is the main claim of this paper. However, they provide this proposition without any proof.*

**A1:** Thank you for your thoughtful comment. We did not include the detailed proof in the original submission, as this paper is primarily application-driven. Indeed, the proof is relatively straightforward. ***We now provide the technical proof of Proposition 3.1 below and will include it in the appendix of the revised version.***

To begin, we recall the linear regression model (4) from the manuscript:

$$\hat{\theta}_k(x, a)=\zeta(x, a)+Q^*(x, a)+\Phi'_k\delta(x, a)+\varepsilon_k(x, a),$$

from which we have the matrix form:

$$\hat{\theta}(x, a)=\Phi\Delta(x, a)+\varepsilon(x, a),$$

where $\hat{\theta}(x, a)=(\hat{\theta}_1(x, a),\dots,\hat{\theta}_K(x, a))'$, $\Delta(x, a)=(\zeta(x, a)+Q^*(x, a),\delta(x,a)')'$, and $\Phi$ and $\varepsilon(x, a)$ are defined in Appendix D of the manuscript.

Next, assuming Assumption D.1 holds, the least squares estimator for $\Delta(x, a)$ is given by:

$$\hat{\Delta}(x, a)=(\Phi'\Phi)^{-1}\Phi'\hat{\theta}(x, a).$$

Combined with Assumption D.2 and Slutsky's theorem, we can derive:

$$\hat{\Delta}(x, a)-\Delta(x, a)=(\Phi'\Phi)^{-1}\Phi'[\Phi\Delta(x, a)+\varepsilon(x, a)]-\Delta(x, a)=(\Phi'\Phi)^{-1}\Phi'\varepsilon(x, a)= \left(\frac{\Phi'\Phi}{K}\right)^{-1}\frac{\Phi'\varepsilon(x, a)}{K}\xrightarrow{p}0.$$

This establishes the consistency of $\hat{\Delta}(x, a)=(\hat{\delta}_1(x, a), \hat{\delta}_2(x, a), \hat{\delta}_3(x, a), \hat{\delta}_4(x, a))'$.
In other words, we have $\hat{\delta}_1(x, a)\xrightarrow{p}\zeta(x, a)+Q^*(x, a)$, $\hat{\delta}_2(x, a)\xrightarrow{p}\sqrt{h(x, a)}$, $\hat{\delta}_3(x, a)\xrightarrow{p}\frac{\sqrt{h(x, a)}s(x, a)}{6}$ and $\hat{\delta}_4(x, a)\xrightarrow{p}\frac{\sqrt{h(x, a)}[k(x, a)-3]}{24}.$ Finally, applying the continuous mapping theorem, we obtain $\hat{h}(x, a)=\hat{\delta}_2^2(x, a)\xrightarrow{p}h(x, a)$, $\hat{s}(x, a) = \frac{6\hat{\delta}_3(x, a)}{\hat{\delta}_2(x, a)} \xrightarrow{p} s(x, a)$, and $\hat{k}(x, a) = \frac{24\hat{\delta}_4(x, a)}{\hat{\delta}_2(x, a)} + 3 \xrightarrow{p} k(x, a)$. This completes the technical proof of Proposition 3.1.

---

## Q2 *The second problem is that the target tasks seems offline RL frameworks, but the proposed method is an online RL algorithm.*

**A2:** While the stock data is fixed during the learning process, the alpha discovery task should be solved in an online framework. ***The key reason is that all experiences (i.e., the tuples of state, action, next state, and reward) are collected through agent-environment interaction, rather than being pre-provided.*** As noted in Levine et al. (2020), an online RL algorithm updates the policy with streaming data collected by the agent itself. In contrast, offline RL uses a dataset of experiences collected by an external (potentially unknown) behavior policy, and this dataset is not altered during training.

---

## Q3 *The key evaluation criteria of this paper is IC, ... However, the paper does not provide any detailed explanation ...*

**A3:** As a standard metric for evaluating alpha discovery ability, IC is essentially the time-series average of Pearson's correlation between predicted alpha values and realized stock returns. A higher IC value indicates that the set of discovered alphas is more powerful in predicting stock trends, which, in turn, can lead to greater financial profits for real traders.
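As a numerical illustration of Proposition 3.1 (our own sketch, not the paper's code; the Cornish-Fisher regressors $1, z, z^2-1, z^3-3z$ and all function names are assumptions), the least-squares recovery of $(h, s, k)$ from quantiles can be checked on an exact Gaussian, where the true moments are known:

```python
from statistics import NormalDist

def qcm_moments(quantiles, taus):
    """Least-squares recovery of (variance, skewness, kurtosis) from
    quantiles, regressing on the Cornish-Fisher terms 1, z, z^2-1, z^3-3z."""
    z = [NormalDist().inv_cdf(t) for t in taus]
    X = [[1.0, zi, zi**2 - 1.0, zi**3 - 3.0 * zi] for zi in z]
    # normal equations A d = b with A = X'X, b = X'y (Assumption D.1: A invertible)
    A = [[sum(row[i] * row[j] for row in X) for j in range(4)] for i in range(4)]
    b = [sum(row[i] * q for row, q in zip(X, quantiles)) for i in range(4)]
    # solve the 4x4 system by Gaussian elimination with partial pivoting
    for c in range(4):
        p = max(range(c, 4), key=lambda r: abs(A[r][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, 4):
            f = A[r][c] / A[c][c]
            A[r] = [ar - f * ac for ar, ac in zip(A[r], A[c])]
            b[r] -= f * b[c]
    d = [0.0] * 4
    for r in range(3, -1, -1):
        d[r] = (b[r] - sum(A[r][j] * d[j] for j in range(r + 1, 4))) / A[r][r]
    sigma = d[1]  # delta_2 -> sqrt(h)
    # mappings from the proof: h = delta_2^2, s = 6 delta_3 / delta_2,
    # k = 24 delta_4 / delta_2 + 3
    return sigma**2, 6 * d[2] / sigma, 24 * d[3] / sigma + 3.0

# sanity check on exact N(1, 2) quantiles: true variance 4, skewness 0, kurtosis 3
taus = [(j + 0.5) / 64 for j in range(64)]
qs = [NormalDist(1.0, 2.0).inv_cdf(t) for t in taus]
h, s, k = qcm_moments(qs, taus)
```

On exact Gaussian quantiles the fit recovers the moments up to floating-point error; applied to IQN-estimated quantiles, the same regression would yield the QCM-style variance used as the exploration bonus.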
---

## Q4 *In addition, the previous baseline, alphagen also provides Rank IC to help the readers to check the performance ..., It seems to provide more baselines in alphagen, such as PPO\_filters to compare the performances of RL based algorithms.*

**A4:** Limited by the maximum rebuttal length, we include the out-of-sample IC and Rank IC values for PPO, AlphaGen, and AlphaQCM in the following table, where the values inside parentheses are standard deviations. As shown, the AlphaQCM method still demonstrates significant advantages over the comparison methods, even though achieving a high Rank IC value is not its primary goal.

|Method|CSI300 IC|CSI300 RankIC|CSI500 IC|CSI500 RankIC|Market IC|Market RankIC|
|-|-|-|-|-|-|-|
|PPO w/ filter|1.14(1.71)|3.02(2.88)|0.98(1.36)|2.79(1.46)|2.15(1.86)|2.58(1.04)|
|AlphaGen|8.13(0.94)|9.40(1.10)|8.08(1.23)|8.71(1.67)|6.04(1.78)|7.49(2.25)|
|AlphaQCM|8.49(1.03)|9.88(1.35)|9.55(1.16)|9.24(1.53)|9.16(1.61)|9.71(2.02)|

---

## Due to length limitations, we would be happy to answer any unresolved questions in the next phase.

[1] Levine, S., A. Kumar, G. Tucker, and J. Fu (2020). Offline reinforcement learning: Tutorial, review, and perspectives on open problems.
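As a side note on the metrics reported in the tables, IC and Rank IC are typically computed as time-series averages of daily cross-sectional correlations. A minimal sketch (our own function names; ties in the rank step are ignored for simplicity):

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def ranks(xs):
    # simple ordinal ranks; a real implementation would average over ties
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0.0] * len(xs)
    for pos, i in enumerate(order):
        out[i] = float(pos + 1)
    return out

def ic(preds_by_day, rets_by_day, use_rank=False):
    """Average the daily cross-sectional correlation between predicted
    alpha scores and realized returns; use_rank=True gives Rank IC."""
    daily = []
    for p, r in zip(preds_by_day, rets_by_day):
        if use_rank:
            p, r = ranks(p), ranks(r)
        daily.append(pearson(p, r))
    return mean(daily)

# a monotone but nonlinear signal: Rank IC is 1, plain IC is below 1
preds = [[1.0, 2.0, 3.0, 4.0]]
rets = [[0.01, 0.04, 0.09, 0.16]]
```

This also illustrates why the two metrics can differ: Rank IC is invariant to monotone transformations of the signal, while plain IC is not.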
Certified Unlearning for Neural Networks
Accept (poster)
Summary: This paper proposes to analyze the formal unlearning guarantees of two varieties of clipped noisy finetuning (either model or gradient clipping) by using recent post-processing DP analyses. Specifically they propose to first project and add noise to the original model, and then apply $T$ steps of clipped (either model weights or gradients) noisy SGD on the retain dataset. The analysis then follows from several techniques of privacy amplification from the initial DP guarantee given by the initial projection and Gaussian noise. This is claimed to be the first algorithm agnostic to the original training algorithm. However this is for a definition of unlearning that is different to the unlearning definitions used in past work. Experimental results for the method are presented for CIFAR10 and MNIST, and they observe they improve efficiency over retraining from scratch (though retraining from scratch provides different unlearning guarantees). ## Update after Rebuttal The rebuttal helped clarify the literature from which the proposed definition came from, and I raised my initial score given this. The authors now also provide experiments comparing to DP-SGD and on more datasets, and I raised my score once more to an accept given this. While I did not follow why there was additional fine-tuning for DP-SGD in this comparison, I trust the authors will explain more in the camera-ready. I hope the authors will incorporate much of our discussion into the camera-ready, as to also help future readers understand the subtle differences between the various unlearning definitions, and acknowledge potential limitations (which are open-problems). In particular, that this post-processing definition is weaker, but it may still be enough for some settings. I now believe one of the key contributions of this paper is motivating future study on this definition, and perhaps stating this explicitly will help future readers. 
For example, maybe something along the lines of "We hope future work studies applications for post-processing unlearning, given this paper showed that it allows for more efficient unlearning, with guarantees, and agnostic to the original training algorithm." Claims And Evidence: The paper claims Definition 2.1 paraphrases the DP-inspired unlearning definitions presented in (Ginart et al, 2019) and (Guo et al, 2019). This is incorrect, as in those papers $\bar{A} = A$, i.e., the certifying algorithm must be the original training algorithm, while in this paper they let $\bar{A}$ be free and implicitly use $\bar{A} = U(A(D \setminus D_F), D \setminus D_f, \emptyset)$. Note $\bar{A} = A$ is widely the definition used for unlearning, even in adaptive settings: see “Adaptive Machine Unlearning” (Gupta et al, 2021). To make their method fit the unlearning definition of past work, we would need the original training algorithm to be $A = U(A(D \setminus D_F), D \setminus D_f, \emptyset)$, which, in the context of the methods in the paper, requires an additional T steps of projected noisy SGD after the original training run. So their algorithm does require assumptions on training (the authors claim otherwise). However, the impact of this additional assumption on the performance of the models and the compute to unlearn is not evaluated. This discrepancy also means the proposed methods should be compared to past certified unlearning work that also modifies the training algorithm for faster unlearning, to justify the improvements of the method: e.g., SISA as proposed by (Bourtoule et al, 2021), or naive DP training. No such comparisons are made. I point the authors to “On The Necessity of Auditable Algorithmic Definitions for Machine Unlearning” (Thudi et al, 2022) for results motivating why unlearning is widely defined with a fixed training algorithm, and is not just a property of the final model.
Methods And Evaluation Criteria: The evaluation does not capture the fact that the method requires modifications to the training algorithm. Specifically:
1) No comparison to other certified approaches which modify the training algorithm is made
2) No analysis/experiments of the impact of the required additional noisy projected fine-tuning is presented

Theoretical Claims: See Claims and Evidence for issues with the unlearning definition and how it then presents inconsistencies with past work. This said, I believe the proofs are correct given the training algorithm ends with $T$ steps of clipped (either gradient or model) noisy fine-tuning on the training dataset. I checked the main proofs of Theorems 4.1 and 4.2.

Experimental Designs Or Analyses: I have several questions regarding how hyperparameters were selected, and the apparent weak performance of the models on CIFAR10 and MNIST. In particular, the model used only reaches $\sim 55$% accuracy on CIFAR10 and $\sim 80$% on MNIST, while reaching $>90$% is widely standard in the literature for both datasets when using ResNets. Given their unlearning algorithm actually presents changes to the training algorithm, I believe further exploration of the impact on the performance of their method is necessary (as we cannot assume it will provide the same performance as current SOTA training algorithms).

Supplementary Material: I looked over the proofs and the tables. This prompted concerns and questions raised earlier in my review.

Relation To Broader Scientific Literature: Unlearning is proposed as a technique to meet privacy and copyright legislation ("Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice" (Cooper et al., 2024)) and to address and detect data poisoning ("Threats, attacks, and defenses in machine unlearning: A survey" (Liu et al., 2024)), amongst other concerns requiring changes to datasets.
In the context of the unlearning literature, this paper tackles an important and novel problem of providing an algorithm with unlearning guarantees in new settings (however, the claims need to be clarified).

Essential References Not Discussed: No essential references missing that I noticed, though as pointed out earlier, past work is misrepresented.

Other Strengths And Weaknesses: Strengths:
1) The proof techniques seemed a novel application of past ideas (despite the correction needed)
2) If the claims are corrected as suggested below, I believe it is possible this also improves the state of certifiably unlearning large groups of data with modifications to the training algorithms (where SISA does not scale well)

Weaknesses:
1) The current claims are incorrect and consequently misrepresent the past literature on unlearning

Other Comments Or Suggestions: What follows are potential changes to the paper to remedy issues with the current claims. I am willing to revisit my score if the issues I've raised regarding the claims are addressed.
1) Rephrase Definition 2.1 to be the same as past work, and state explicitly what training algorithm you are unlearning in the theorem statements. Alternatively, present the definition as a new unlearning definition; however, given past negative results on auditing unlearning using just model weights, I currently find this definition hard to justify.
2) Provide experiments exploring the impact of the additional noisy clipped fine-tuning needed in the training algorithm across models and datasets. In particular, consider evaluating performance degradation, and account for the additional costs to training this presents compared to naively retraining. I believe it simply doubles the current cost analysis, which would mean the method is less efficient than retraining, but the additional training cost is a one-time cost and may be less important over many unlearning requests.
3) Given the method requires modifications to training, provide a more detailed comparison to the standard (modifying training) exact unlearning algorithm SISA. Note Figure 6 in (Bourtoule et al., 2021) suggests it can cut unlearning costs by about a third with minimal performance degradation for image classification, though as the size of the forget set grows their unlearning cost increases, and so eventually the methods in this paper could be more efficient.

Questions For Authors: I described my main concerns in the previous sections, but now list specific (but more minor) questions:
1) Why is the accuracy on CIFAR10 and MNIST much lower than standard ResNet results? I understand the architecture is relatively small compared to the ResNets commonly used, but is there a specific reason not to use ResNet for these experiments? My current hypothesis for the results could be that the method suffers similar performance degradation as DP training.
2) How were the hyperparameters chosen for the figures in the main body? I found the tables of hyperparameters in the appendix but did not connect how the tables were used to choose the final hyperparameters.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your time and valuable comments that will allow us to improve our manuscript.

### **Clarification on Definition 2.1:**

We apologize for the confusion caused. We believe that the reviewer misinterpreted our definition. We would like to note that **no modification to the training algorithm is required in our work**. Our Definition 2.1 is intended as a general and unifying framework that covers multiple previous definitions of unlearning. We want to clarify that in Definition 2.1, we did not predefine the certifying algorithm $\bar A$ for generality. We only ask for the existence of such a certifying algorithm $\bar A$ and its independence from the dataset $D_f$. Specifically, our definition captures:
* (Ginart et al., 2019; Guo et al., 2019), where the certifying algorithm $\bar A$ equals the training algorithm $A$.
* (Sekhari et al., 2021; Allouah et al., 2024), where $\bar A(\cdot) = U(A(\cdot), \cdot, \varnothing)$.

In our work we provide certification with $\bar A(\cdot) = U(A(\cdot), \cdot, \varnothing)$. A similar choice was made in several prior works, such as (Sekhari et al., 2021) and (Allouah et al., 2024), and therefore is not a novelty of our work. Such a choice allows us to prove the unlearning guarantees of our algorithms (3) and (4). Importantly, our choice of the certifying algorithm $\bar A$ is purely theoretical and does not require running additional computational steps in practice. Thus, there is no practical modification of the original training required. Consequently, comparisons with methods that fundamentally modify training (e.g., SISA or naive DP training) fall outside our experimental setup. We appreciate your suggestion about explicitly clarifying these points in the manuscript. We will revise Definition 2.1 and associated discussions, clearly stating how our definition captures prior definitions and the choice of the certifying algorithm $\bar A$.

### **About Thudi et al. (2022):**

We would like to thank the reviewer for pointing to the related work. We will add it to the next version of our paper. Considering the adversarial setting with the server possibly forging unlearning, as was done in (Thudi et al., 2022), is an interesting problem but beyond the scope of our work. In our work, we focus on developing algorithms for how to achieve unlearning in the non-adversarial (honest server) setting, a problem that has previously remained unsolved for general non-convex functions. Providing a certification algorithm for our approach in the adversarial setting is an interesting direction for future work but orthogonal to the current work.

### **Responses to Specific Questions:**

1. **Accuracy on CIFAR-10 and MNIST:** Our reported accuracy is indeed lower than typical ResNet results because we employed much smaller neural networks. Higher-dimensional settings suffer performance degradation similar to standard DP-training methods. We will highlight this in the paper, and we acknowledge that future experiments with larger, state-of-the-art architectures are certainly valuable.
2. **Hyperparameter selection:** The hyperparameters were tuned based on standard grid search procedures to balance performance guarantees, while strictly following the theoretical guidelines relating the parameters (e.g., Theorem 4.1 indicates the noise magnitude given the number of iterations). We list the result of our grid search in Appendix B. We will provide explicit explanations in the revised manuscript.

We hope that we have successfully clarified any confusion regarding our Definition 2.1 and addressed the concerns raised by the reviewer. If this is the case, we kindly ask the reviewer to reconsider their evaluation and raise their score accordingly.

---

Rebuttal Comment 1.1: Comment: Thank you for your response! I now understand the context for the definition, and thank the authors for their detailed response.
To summarize my current understanding of the main contribution of the paper: this paper shows that for this weaker "post-processing" unlearning definition (methods satisfying Ginart et al. also satisfy this definition, but not vice versa), we do not need to put restrictions on the original training algorithm. I am raising my score given this clarification. However, I emphasize again that the authors should explicitly clarify in the paper that this is a weaker unlearning definition than considered in a large part of the literature. Importantly, a now open problem is understanding use-cases for this "post-processing" unlearning (e.g., when someone asks for their data to be removed, is it enough for it to look like what we would have post-processed if it was not in the dataset?).

Also, I still find an issue with the evaluation: DP-SGD and SISA are baseline methods satisfying this definition, neither of which are compared to. The authors argue DP-SGD and SISA approaches fall outside the scope of their paper as they modify the training algorithm. However, I find the current experiment suite does not answer what we gain/lose by doing this post-processing method (relative to approaches that do change the learning algorithm/are not post-processing). The authors mention they expect to suffer similar trade-offs as DP-SGD, and will discuss this as future work. I wish to emphasize that SISA would seemingly not suffer such a trade-off, and so the (practical) benefits of this method compared to it are less clear beyond not modifying the training algorithm (but this seems a weak statement as one still needs to post-process and lose performance). Ultimately, having the experiments to quantitatively show the limits of this method would strengthen the paper, in my opinion.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for their thoughtful feedback.
To address concerns around evaluation in more complex settings, we conducted **new experiments** on CIFAR-100 and CIFAR-10 using **ResNet architectures** pretrained on public data (ImageNet). This setup, where unlearning is applied to the last few layers of a pretrained model, has become standard in recent certified approximate unlearning works (e.g., Guo et al. 2020, Chien et al. 2024). However, prior works restrict themselves to convex settings (i.e., linear final layer), whereas our method is **the first to provide certified unlearning guarantees for multiple non-convex layers, without any smoothness/convexity assumptions.** More precisely, we remove the last layer of ResNet-18 (pretrained on public data) and replace it with a 3-layer fully connected neural network. We first train the last 3 layers of our resulting architecture on the full data, and then unlearn the forget data from these 3 layers. To demonstrate practical effectiveness, we compare our method against **DP-SGD (ε = 50)**, as suggested by reviewers, and **retraining**, while maintaining a much stricter ε = 1 guarantee. DP-SGD enforces privacy before unlearning, followed by additional fine-tuning during unlearning. As shown below, similar to Table 2 in the paper, our method consistently requires fewer epochs-- up to **2–3× less compute** than DP-SGD, and faster than retraining in high-accuracy regimes. The tables report the number of training epochs needed to reach each target accuracy. 
### **CIFAR-100**

| Accuracy | Gradient Clipping (ours) | Retrain | DP-SGD |
|:---------|-------------------------:|:--------|:-------|
| 50% | 14 | 17 (≈ 18% slower) | >50 (> 72% slower) |
| 53% | 18 | 20 (≈ 10% slower) | >50 (> 54% slower) |
| 55% | 20 | 22 (≈ 9% slower) | >50 (> 60% slower) |
| 58% | 26 | 29 (≈ 10% slower) | >50 (> 48% slower) |
| 60% | 32 | 34 (≈ 6% slower) | >50 (> 36% slower) |
| 62% | 39 | >50 (> 22% slower) | >50 (> 22% slower) |

### **CIFAR-10**

| Accuracy | Gradient Clipping (ours) | Retrain | DP-SGD |
|:---------|-------------------------:|:--------|:-------|
| 85% | 9 | 10 (≈ 10% slower) | 17 (≈ 47% slower) |
| 86% | 14 | 17 (≈ 18% slower) | 22 (≈ 36% slower) |
| 87% | 21 | 28 (≈ 25% slower) | 35 (≈ 40% slower) |
| 88% | 39 | >50 (> 22% slower) | >50 (> 22% slower) |

These results demonstrate that our method achieves significant gains in both privacy and efficiency, even outperforming DP-SGD under a much tighter certificate $\epsilon$. Crucially, we do so **without modifying the original training process**, placing our work in the post-processing regime.

Finally, we view our method as orthogonal to SISA. It could be used to improve shard retraining when approximate unlearning is sufficient, though combining both directions is beyond the scope of this work. While certified unlearning in non-convex settings remains an open challenge, we believe this work represents a **major step forward**, bridging the gap between formal guarantees and practical applicability in deep learning. We hope that these additional results and clarifications effectively address your concerns, and we would greatly appreciate it if you would consider raising your score based on this.
Summary: The paper presents a post-processing technique to guarantee approximate unlearning in non-convex settings. The paper builds on several works in the differential privacy literature that incorporated noise during the optimization process to improve privacy guarantees. The paper proposes two methods, gradient clipping and model clipping, to achieve unlearning. The paper theoretically computes the degree of noise required in each setting as a function of clipping and number of iterations. Further results show that the proposed method requires a significantly smaller amount of noise per iteration compared to the baselines.

Claims And Evidence: Strengths:
- The paper is well motivated and very easy to follow.
- The paper proposes a simple method for approximate unlearning backed by theoretical guarantees that extend to non-convex settings. This removes the need to know several characteristics of the existing loss function, like the smoothness constant.

Methods And Evaluation Criteria: Weaknesses:
- The evaluation of the paper seems limited as the experiments only focus on MNIST and CIFAR10 datasets, and use very small-scale neural networks. I would like to see results using larger models and more complex datasets. Specifically, I'm interested to know if the updates could be applied to a subset of the model parameters to achieve $(\epsilon, \delta)$ unlearning.
- An obvious baseline that I would like to see is DP-SGD. What if we trained the model from scratch using differential privacy? How does the unlearning performance compare with DP-SGD?
- The paper should report more experiments using different levels of $\epsilon$. Report the final accuracy and the number of iterations required for different $\epsilon$.
- In Table 2, we observe that the proposed method requires a large number of fine-tuning steps that is comparable to retraining the model from scratch. How does this compare with exact unlearning systems?
These systems reduce the unlearning steps by using modular systems and can retain high performance. I understand that this is a different paradigm of unlearning, but it would be interesting to discuss and compare the shortcomings of each unlearning technique.

[1] https://arxiv.org/abs/1912.03817
[2] https://arxiv.org/pdf/2406.16257

Theoretical Claims: The theoretical claims in the paper look good to me. However, I'm not an expert in differential privacy and the proofs of the theoretical claims are beyond the scope of my expertise.

Experimental Designs Or Analyses: NA

Supplementary Material: I have skimmed over some parts of the theoretical proof.

Relation To Broader Scientific Literature: NA

Essential References Not Discussed: NA

Other Strengths And Weaknesses: NA

Other Comments Or Suggestions: NA

Questions For Authors: Please respond to the weaknesses above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
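As background for my comments above, here is my rough paraphrase of the two clipping variants the paper studies: an initial projection plus Gaussian noise, followed by $T$ steps of noisy SGD on the retain set, with either the per-step gradient or the model iterate clipped. All names below (`noisy_clipped_finetune`, `grad_fn`, etc.) are illustrative, not the authors' code:

```python
import numpy as np

def clip(v, c):
    """Rescale v to Euclidean norm at most c."""
    n = np.linalg.norm(v)
    return v if n <= c else v * (c / n)

def noisy_clipped_finetune(w0, grad_fn, T, eta, C, sigma, mode="gradient", seed=0):
    """Sketch of the two unlearning variants (my paraphrase):
    project and noise the original model w0, then run T noisy SGD steps on
    the retain loss, clipping either the gradient ("gradient", i.e.
    clip-before-updating) or the iterate ("model", update-before-clipping)."""
    rng = np.random.default_rng(seed)
    w = clip(np.asarray(w0, dtype=float), C) + sigma * rng.standard_normal(len(w0))
    for _ in range(T):
        if mode == "gradient":   # clip the per-step gradient to norm C
            w = w - eta * clip(grad_fn(w), C) + sigma * rng.standard_normal(len(w))
        else:                    # clip the updated model iterate to norm C
            w = clip(w - eta * grad_fn(w), C) + sigma * rng.standard_normal(len(w))
    return w
```

In both variants the Gaussian noise added each step is what the paper's theorems calibrate; the model-clipping variant additionally keeps the iterate inside a norm ball, which is what enables the post-processing-style amplification argument.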
Rebuttal 1: Rebuttal: We thank the reviewer for their time and constructive feedback.
1. We acknowledge that our experiments currently focus on fundamental settings (MNIST, CIFAR10), yet our primary contribution lies in providing rigorous theoretical guarantees without restrictive assumptions such as smoothness or convexity.
2. DP-SGD changes the training algorithm fundamentally and, therefore, does not fit into our setting. Moreover, if we were to implement the DP-SGD baseline, we would need the per-iteration noise to be scaled linearly with the forget set size, compared to standard DP training. Since our forget set size is 5000 for CIFAR-10 and 6000 for MNIST, we do not expect any good practical performance. There is also theoretical evidence that unlearning for free via DP is severely limited in high dimensions (Allouah et al., 2024).
3. Including additional experiments varying $\varepsilon$ is a great suggestion. We will include such results clearly in our revised paper. We do not expect any significant difference in the relative performance.
4. We appreciate the suggestion regarding exact unlearning systems, which indeed modify the training algorithm. We are unaware of any exact unlearning system that does not modify the original training, except for retraining from scratch. We will add a discussion on this alternative approach to the related work of our paper. Exact unlearning systems typically modify the training, losing the performance quality of the model, e.g., SISA (Bourtoule et al., 2019); however, they might be more efficient when unlearning.
Summary: The paper proposes a novel certified unlearning method that integrates noisy fine-tuning with privacy amplification by stochastic post-processing, which introduces gradient clipping and model clipping, both combined with Gaussian privacy noise. The authors provide a rigorous theoretical analysis for unlearning guarantees that do not depend on restrictive assumptions. Empirical results demonstrate the effectiveness of the proposed method.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: The reviewer did not check the correctness of the proofs.

Experimental Designs Or Analyses: The reviewer has checked all of the experimental designs.

Supplementary Material: The reviewer read the experiments part of the supplementary material.

Relation To Broader Scientific Literature: This paper presents work whose goal is to advance the field of machine unlearning, which is specifically oriented to improve the trustworthiness of machine learning.

Essential References Not Discussed: The paper focuses on the field of certified machine unlearning. However, it only cites certified unlearning methods under the conventional convex setting while missing the works for other specific settings, e.g., graph neural networks [1] and minimax models [2].

[1] Eli Chien, Chao Pan, and Olgica Milenkovic. "Certified Graph Unlearning".
[2] Jiaqi Liu, Jian Lou, Zhan Qin, and Kui Ren. "Certified Minimax Unlearning with Generalization Rates and Deletion Capacity". In NeurIPS 2023.

Other Strengths And Weaknesses: Strengths:
1. The unlearning guarantees do not depend on restrictive assumptions such as loss function smoothness. The theoretical analysis is rigorous.
2. The paper is overall well-structured. The narrative is easy to follow.

Weaknesses: 1. The network used in the experiments is simple, and the datasets are small.
Although this paper mainly focuses on the theoretical part, it would be better to include the experimental results on larger datasets, e.g., ImageNet. Other Comments Or Suggestions: 1. Lack of explanation of notations ($C_0, C_1, C_2, \lambda$ and $\gamma$) in the table caption. Questions For Authors: It seems that the regularization factor $\gamma$ does not affect the unlearning guarantee results of model clipping. Please explain the role of $\gamma$ here. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their time and constructive feedback.
1. Thanks for pointing to the papers about unlearning for graph neural networks and minimax models. We will add these references to the related works section.
2. We acknowledge that our experiments currently focus on fundamental settings (MNIST, CIFAR10), yet our primary contribution lies in providing rigorous theoretical guarantees without restrictive assumptions such as smoothness or convexity.
3. We will add the explanation of notations $C_0, C_1, C_2, \lambda$, and $\gamma$ to the table caption.
4. Regularization factor $\gamma$: Thank you for raising this question. The regularization factor $\gamma$ plays a crucial role in our theoretical analysis, related to privacy amplification via iteration (see Sec. 4.1), as the required noise magnitude decreases exponentially in the number of iterations $T$ thanks to regularization. In practice, it helps control the norm of the model parameters throughout fine-tuning, which directly influences the amount of noise required per iteration to ensure certified unlearning guarantees. We will clarify this role of $\gamma$ explicitly in the revised manuscript.
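To make the intuition behind point 4 concrete, here is a toy sketch (illustrative only, far simpler than our actual analysis): with an $L_2$ penalty of strength $\gamma$, each gradient step contracts the distance between two runs started from nearby points by a factor $(1 - \eta\gamma)$, so the influence of the starting point, and hence the noise needed to mask it, decays geometrically in $T$.

```python
# Toy illustration (not the paper's proof): with L2 regularization gamma,
# a gradient step on the regularized loss maps w -> w - eta*(g(w) + gamma*w).
# When two runs see identical data gradients g, their distance contracts by
# (1 - eta*gamma) per step, so the initial offset decays as (1 - eta*gamma)**T.
def initial_offset_after(delta0, eta, gamma, T):
    w1, w2 = 0.0, delta0  # two starting points, identical (zero) data gradient
    for _ in range(T):
        w1 -= eta * gamma * w1
        w2 -= eta * gamma * w2
    return abs(w2 - w1)
```

Running this with, say, `eta=0.1`, `gamma=1.0` shows the offset shrinking by a factor of 0.9 per step, which is the mechanism behind the exponential decrease in required noise mentioned above.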
Summary: **Main Results**: Although the idea of fine-tuning an originally trained model on retained data has been proposed before, it has traditionally been viewed as an empirical forgetting strategy for non-convex tasks. This paper provides certified unlearning guarantees for neural networks without requiring knowledge of the smoothness constant of the loss function.

**Main Algorithmic/Conceptual Ideas**: The authors propose two clipping-based strategies: gradient clipping and model clipping. The core idea involves either "clip-before-updating" or "update-before-clipping," with the addition of Gaussian noise to the output.

**Main Findings**: The authors provide approximate unlearning guarantees for both methods. Then they compare the performance of their methods with the baseline (output perturbation) when achieving the same unlearning guarantee.

## Update after Rebuttal

The authors have addressed most of my concerns by conducting additional empirical evaluations, and the results look reasonable. Considering the comments and responses from the other reviewers, I have decided to raise my score to "weak accept."

Claims And Evidence: The authors offer detailed proofs to support the certified unlearning guarantees of their proposed methods, and upon review, these proofs appear sound and logical. However, the experimental analyses presented are somewhat basic and do not sufficiently explore or demonstrate the effectiveness/feasibility of the proposed approaches in more complicated scenarios. See more details in *Methods And Evaluation Criteria* and *Experimental Designs Or Analyses*.

Methods And Evaluation Criteria: The evaluation criteria include: 1) the number of steps required to achieve a fixed target accuracy, and 2) the validation accuracy attained when the number of update steps is fixed. To strengthen their analysis, the authors could provide theoretical justifications regarding the utility and complexity trade-offs, as discussed in [1].
Alternatively, they could offer empirical justifications focusing on relearn time, the accuracy of membership inference attacks (MIA), and the AUC score of MIA, as explored in [2]. [1] Youssef Allouah, et al. "The Utility and Complexity of In- and Out-of-Distribution Machine Unlearning." ICLR 2025. [2] Binchi Zhang, Yushun Dong, Tianhao Wang, Jundong Li. "Towards Certified Unlearning for Deep Neural Networks." ICML 2024. Theoretical Claims: I reviewed the proofs of Theorem 4.1 and Theorem 4.2, and they seemed logical and coherent to me. However, I must admit that I did not examine their correctness in great detail. Experimental Designs Or Analyses: + The authors primarily compare their methods to output perturbation and the most naive baseline method, 'retrain.' However, it would be beneficial to include comparisons with other related methods that also provide certified unlearning guarantees, such as using Newton updates [1], Fisher forgetting [2], and state-of-the-art methods proposed in [3] and [4]. + The authors do not conduct experiments in more practical settings to verify the feasibility of their proposed methods. For example, - the sequential setting where users can send unlearning requests at different time points in sequence - the microbatch deletion setting where the size of the forget set varies, such as 0.1%, 1%, 10%, etc. + Regarding the statement on page 7: “the privacy target is reached before exhausting the iteration budget, in less than 100 iterations,” the terms “privacy target” and “iteration budget” are somewhat confusing. It seems these terms have not been clearly defined earlier in the text. - What is the relationship between these terms and the “accuracy target” and “compute budget” mentioned later? - Does the "privacy target" refer to the “(ε, δ)-unlearning guarantee,” and does the "iteration budget" refer to the number of iterations required to reach the target accuracy? 
+ In Figure 2, if I understand correctly, the accuracy improvement seems to primarily result from standard fine-tuning on the retained set, as the accuracy after the noisy steps drops to nearly zero. Moreover, from the convergence curve, it appears that this "first-noisy-then-standard" fine-tuning procedure only results in less than a 0.1 accuracy improvement compared to retraining from scratch, however, with a similar computational cost and more significant fluctuations (especially during the early unlearning epochs) . The authors are expected to provide more justifications for those findings. References:\ [1] Certified Data Removal from Machine Learning Models. \ [2] Golatkar, A., Achille, A., Ravichandran, A., Polito, M., and Soatto, S. (2021). Mixed-privacy forgetting in deep networks. ICCV. \ [3] Youssef Allouah, et al. "The Utility and Complexity of In- and Out-of-Distribution Machine Unlearning." ICLR 2025. \ [4] Binchi Zhang, Yushun Dong, Tianhao Wang, Jundong Li. "Towards Certified Unlearning for Deep Neural Networks." ICML 2024. Supplementary Material: I reviewed the details of Section B, which contains additional experimental results, and I quickly skimmed through Section A, which includes the proofs of the theorems, in the supplementary material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: It appears that the authors have overlooked a highly relevant paper that addresses a similar problem: [1] Binchi Zhang, Yushun Dong, Tianhao Wang, Jundong Li. "Towards Certified Unlearning for Deep Neural Networks." ICML 2024. Other Strengths And Weaknesses: **Originality**: The problem of establishing a certified unlearning guarantee for fine-tuning-based methods in non-convex cases is well-motivated. **Clarity**: As noted in the previous comments, the paper uses somewhat vague language and lacks sufficient theoretical or empirical justification for its methods. Many unclear statements should be clarified in the revised version. 
Refer to [Questions for Authors] for specific areas needing improvement.

Significance: I think the methods proposed in this paper could be of interest to the machine unlearning community.

Other Comments Or Suggestions: A Minor Comment: The authors use several terms—"epoch," "iteration," "time," "compute budget," and "iteration budget"—that seem to convey similar meanings. Could the authors clarify the distinctions between these terms?

Questions For Authors: The authors are expected to address all previously mentioned issues, especially the sections on *Methods and Evaluation Criteria* and *Experimental Designs or Analyses.*

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and constructive feedback, and we address the key points below.

**1. Baseline Comparisons:** We chose only output perturbation and retraining from scratch since these algorithms are the only baselines in the literature that can achieve certified unlearning without additional assumptions on the loss function or modification of the original training. In more detail:
* Newton updates [1]: assume **convexity and smoothness** of the loss function, and are therefore inapplicable for unlearning deep neural networks.
* Golatkar et al. [2]: impose the **smoothness assumption** on the loss function, which frequently does not hold in deep learning if activation functions are non-smooth. Moreover, their unlearning definition is based on mutual information, and it is non-trivial how to connect it with our differential privacy-based definition of unlearning.
* Allouah et al. [3]: impose additional assumptions of either **strong convexity and smoothness**, or assume **smoothness and that the minimum is unique and achievable from any initialization**. Such assumptions do not hold in the deep learning setting.
* Zhang et al. [4]: require the loss function to be **L-smooth, as well as knowledge of the minimal eigenvalue of the Hessian**, limiting its applicability to the general deep learning setting.

We are unaware of any other work that can tackle certified unlearning in the same generality as our work without limiting additional assumptions on the loss functions. We will add this discussion to the next version of our manuscript.

**2. Utility-complexity tradeoff:** Thank you for suggesting a discussion of utility-complexity tradeoffs as explored in [1]. However, such tradeoffs typically require strong assumptions such as convexity or smoothness.
Given our primary contribution is providing guarantees for general non-convex settings without these assumptions, clearly characterizing utility-complexity tradeoffs becomes inherently challenging—particularly due to issues like the curse of dimensionality and non-smoothness. We will clarify this inherent difficulty in our revised manuscript. **3. Empirical Justifications (Accuracy and AUC of MIA):** Our goal is to provide rigorous theoretical guarantees without imposing restrictive assumptions such as smoothness or knowledge of eigenvalues of Hessian matrices, as required by [2]. Hence, empirical measures such as MIA scores, used to evaluate methods without strong theoretical guarantees, are not directly comparable or necessary in our theoretical setting. Nevertheless, connecting theoretical unlearning guarantees to practical empirical metrics is an intriguing direction for future exploration, but beyond the scope of this work. **4. Accuracy Improvements (Fig. 2):** Our experimental results (Table 2, Figure 1) show consistent improvement over retraining from scratch across all compute budgets and accuracy targets. Notably, Figure 1 illustrates that with a small compute budget, our method improves test accuracy by up to 8%. The improvement is most significant in low-compute settings, while larger compute budgets make retraining from scratch more effective. In Figure 2, where the compute budget is relatively large, the accuracy improvement is less pronounced. Importantly, both our method and retraining from scratch employ fine-tuning on the retained set. Our approach preserves useful information from the trained model, making fine-tuning more effective. **5. Experimental Setup and Practical Settings:** We agree that extending our experimental framework (e.g., sequential or microbatch deletion settings, as suggested by the reviewer) would provide additional practical insights. 
While our current focus remains on fundamental theoretical generality, we recognize the value of practical validation and will explicitly outline these as future directions.

**6. Terminology Clarification:** We acknowledge the confusion caused by unclear terminology ("privacy target," "iteration budget," etc.). Yes, your interpretations are correct: the "privacy target" refers to the "$(\varepsilon, \delta)$-unlearning guarantee," while the "iteration budget" corresponds to the maximum allowed iterations. We will clearly define these terms in the revised manuscript.

### References

[1] Guo et al. Certified Data Removal from Machine Learning Models. ICML 2020.
[2] Golatkar et al. Mixed-privacy forgetting in deep networks. ICCV 2021.
[3] Allouah et al. The Utility and Complexity of In- and Out-of-Distribution Machine Unlearning. ICLR 2025.
[4] Zhang et al. Towards Certified Unlearning for Deep Neural Networks. ICML 2024.

---

Rebuttal Comment 1.1: Comment: Dear authors, Thank you for your response. I appreciate your efforts to address the review comments. However, I find that the rebuttal does not sufficiently address my initial concerns or provide compelling new evidence to alter my assessment. To strengthen the manuscript, I still recommend implementing **at least one** of the following improvements:

1. Conduct a more comprehensive empirical evaluation (following the experimental setups of the closely related work [1]);
2. Provide a formal analysis of either utility guarantees or computational complexity.

After reading the rest of the reviews and the responses, I decided to maintain my initial rating.

[1] Binchi Zhang, Yushun Dong, Tianhao Wang, Jundong Li. Towards Certified Unlearning for Deep Neural Networks. ICML 2024.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for their thoughtful feedback.
To address concerns around evaluation in more complex settings, we conducted **new experiments** on CIFAR-100 and CIFAR-10 using **ResNet architectures** pretrained on public data (ImageNet). This setup, where unlearning is applied to the last few layers of a pretrained model, has become standard in recent certified approximate unlearning works (e.g., Guo et al. 2020, Chien et al. 2024). However, prior works restrict themselves to convex settings (i.e., a linear final layer), whereas our method is **the first to provide certified unlearning guarantees for multiple non-convex layers, without any smoothness/convexity assumptions.**

More precisely, we remove the last layer of ResNet-18 (pretrained on public data) and replace it with a 3-layer fully connected neural network. We first train the last 3 layers of the resulting architecture on the full data, and then unlearn the forget data from these 3 layers. To demonstrate practical effectiveness, we compare our method against **DP-SGD (ε = 50)**, as suggested by reviewers, and **retraining**, while maintaining a much stricter ε = 1 guarantee. DP-SGD enforces privacy before unlearning, followed by additional fine-tuning during unlearning.

As shown below, similar to Table 2 in the paper, our method consistently requires fewer epochs, up to **2–3× less compute** than DP-SGD, and is faster than retraining in high-accuracy regimes. The tables report the number of training epochs needed to reach each target accuracy.
### **CIFAR-100**

| Accuracy | Gradient Clipping (ours) | Retrain | DP-SGD |
|:---------|-------------------------:|:--------|:-------|
| 50% | 14 | 17 (≈ 18% slower) | >50 (> 72% slower) |
| 53% | 18 | 20 (≈ 10% slower) | >50 (> 54% slower) |
| 55% | 20 | 22 (≈ 9% slower) | >50 (> 60% slower) |
| 58% | 26 | 29 (≈ 10% slower) | >50 (> 48% slower) |
| 60% | 32 | 34 (≈ 6% slower) | >50 (> 36% slower) |
| 62% | 39 | >50 (> 22% slower) | >50 (> 22% slower) |

### **CIFAR-10**

| Accuracy | Gradient Clipping (ours) | Retrain | DP-SGD |
|:---------|-------------------------:|:--------|:-------|
| 85% | 9 | 10 (≈ 10% slower) | 17 (≈ 47% slower) |
| 86% | 14 | 17 (≈ 18% slower) | 22 (≈ 36% slower) |
| 87% | 21 | 28 (≈ 25% slower) | 35 (≈ 40% slower) |
| 88% | 39 | >50 (> 22% slower) | >50 (> 22% slower) |

These results demonstrate that our method achieves significant gains in both privacy and efficiency, even outperforming DP-SGD under a much tighter certificate $\epsilon$. Crucially, we do so **without modifying the original training process**, placing our work in the post-processing regime. We also thank the reviewer for pointing out the relevance of Zhang et al., ICML 2024, and we will include a proper citation and discussion of this work, similar to what we outlined in our initial rebuttal, in the revised manuscript. While certified unlearning in non-convex settings remains an open challenge, we believe this work represents a **major step forward**, bridging the gap between formal guarantees and practical applicability in deep learning. We hope that these additional results and clarifications effectively address your concerns, and we would greatly appreciate it if you would consider raising your score based on this.
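As a minimal illustration of the kind of update behind the "Gradient Clipping (ours)" column above, per-example gradient clipping followed by a noised averaged step on the retained layers can be sketched in NumPy. The constants `C`, `sigma`, and `lr` are illustrative placeholders, not the calibrated values that yield the (ε, δ)-certificate in the paper.

```python
import numpy as np

def clipped_noisy_step(w, per_example_grads, lr=0.1, C=1.0, sigma=0.5, rng=None):
    """One fine-tuning step on the retained data: clip each per-example
    gradient to norm at most C, average, then add Gaussian noise scaled by
    the clipped sensitivity C / n. Constants here are illustrative only."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(per_example_grads)
    clipped = [g * min(1.0, C / np.linalg.norm(g)) if np.linalg.norm(g) > 0 else g
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, sigma * C / n, size=w.shape)
    return w - lr * (avg + noise)
```

Clipping bounds the influence any single forget-set example can have on the update, which is what makes DP-style accounting possible without assumptions on the loss.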
RE-Bench: Evaluating Frontier AI R&D Capabilities of Language Model Agents against Human Experts
Accept (spotlight poster)
Summary: The authors provide 7 research engineering problems together with surrounding environments. They evaluate both humans and AI models (Claude Sonnet 3.5, o1) on these problems, over different amounts of time spent and solution attempts. The results show that over short time horizons, the AI models tend to perform better, whereas humans gain more later on. Overall, the benchmark measures the performance of AI agents on real-world engineering tasks of small scope. The goal is to get a sense of when AI models will be able to automate real-world engineering, with implications for the development speed of frontier AI.

Claims And Evidence: Yes, the claims are supported by clear and convincing evidence.

Methods And Evaluation Criteria: Yes, they make sense. Nevertheless, some notes on major limitations:

A. The main limitation: The engineering problems designed for this benchmark are relatively small-scale, such that humans can make substantial progress in the time frame of one day to one week. As the authors note, it is unclear to what extent the findings generalize to realistic engineering problems in frontier-model development, where the authors expect that AI currently performs much worse compared to humans than in the evaluations in this work.

B. One problem I see is that the benchmark can only test full automation of engineering tasks; thus, if the AI agent can only do 90% of the work to succeed at a task but completely fails at the remaining 10%, it would score zero in this benchmark, even though a human engineer with access to such a model could potentially significantly speed up their work. Thus, one should be cautious about taking limited performance on such a benchmark as reassurance that AI R&D will *not* speed up. Future work could thus look into human-AI collaboration on engineering tasks (which would, however, probably be harder to evaluate and replicate).

C.
Another potential limitation is that the two scaffolds used in this work seem relatively arbitrary, and the results show that different scaffolds can yield significantly different results. Thus, we could imagine that a third type of scaffold, which doesn't yet exist, could change the results substantially.

D. To get a sense of recent improvements, it would be interesting to know how well newer models perform (Gemini 2.0+Jules, Claude 3.7+Claude Code, o3-mini, DeepSeek-R1, ...).

Theoretical Claims: The paper does not contain any theoretical claims.

Experimental Designs Or Analyses: I did not check the experimental designs or analyses in detail, but they seemed reasonable from reading the main paper.

Supplementary Material: I did not review the appendices.

Relation To Broader Scientific Literature: See the paper, Section 4 on related work.

Essential References Not Discussed: I am not aware of essential references not discussed.

Other Strengths And Weaknesses: Strength: The paper is very well written. Weaknesses:

i. Figure 2 does not immediately clarify how the allocation of a total time budget to samples and a time horizon is done (though it seems like later you clarify that you use the optimal allocation).

ii. The main paper might benefit from more details on some or all of the 7 tasks, to get more of a feel for what the AI agents are evaluated on.

Other Comments Or Suggestions: Overall assessment: I currently rate this paper as "accept" as is, i.e., I think even without addressing the limitations I mention above, this is a good paper. I assume that this work must have been an immense effort, and I appreciate that it gives us an indication of where current frontier AI stands in terms of ability to automate realistic engineering skills. The questions and concerns I mention might go significantly beyond the scope of this work.
Depending on the extent to which the limitations are addressed in the rebuttal, and depending on problems surfaced by other reviewers that I didn't catch (and that I consider significant), it is possible that I will increase or decrease the evaluation.

Typos:
a) p. 1, right side, 25: "programming, computer use, question answering" -- an "and" seems missing
b) p. 8, left side, 38: "parallely-developed"
c) p. 8, right side, 430: "the ability of increasingly capable to autonomously..." -- increasingly capable frontier models?

Questions For Authors:
1. The paper only evaluates fairly recent models. Do older models fail the tasks completely? If not, do we have any indication of a "trend"? I'd find it particularly interesting to know whether there is a trend toward AI agents beating humans over increasingly long time-horizon tasks, or for an increasing "engineering cost".
2. How much money was spent on human and/or AI evaluations? How much tweaking is required to run your evaluations on new models? This could give other researchers an indication of whether they have the resources to use your evaluation environments for their own evaluations.
3. Do you have thoughts on whether strong RE-Bench performance would differentially lead to a stronger improvement in AI capabilities vs. AI safety?
4. How good are the human experts?
5. Claude 3.5 Sonnet curves upward toward the end in Figure 2. Any idea how to interpret this? (Though to be clear, it probably wouldn't curve upward for a linearly-spaced x-axis.)
6. Do you know what it is about the scaffolds that changes the results so much?
7. Could you provide an intuition for how to reconcile Figures 2 and 4 with each other, which look quite different from each other?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and insightful comments! We are glad that the reviewer finds the work to be well-written, the evidence to be clear and convincing, and that the results provide insight into the capabilities of current frontier AI on realistic research engineering. We address the reviewer's concerns in "Methods and Evaluation Criteria" below:

1) **We believe that RE-Bench represents a substantial improvement in the realism and complexity of AI-relevant research engineering evaluations compared to the existing literature (see Table 7 for a detailed comparison)!** In designing this benchmark, we decided that 1-day-long tasks would be a reasonable compromise between providing valuable new insights and the cost and complexity of running evaluations and getting high-quality human baseline data. However, we recognize that this is a limitation of our work and discuss how time horizons may affect our results in Section 5. We are excited for future work to build on RE-Bench and develop better ways of measuring longer-task performance!

2) **We believe that RE-Bench can measure partial AI performance quite well!** We carefully designed our tasks and score functions such that progress is intermediately measurable and it is possible to make easy and quick progress. For example, in the "Optimize a Kernel" task, a very easy improvement is to use built-in PyTorch functions to optimize the slow starting solution. In "Optimize LLM Foundry", an easy optimization is removing unneeded steps like checkpointing. We thus think that AIs that can reach close to an expert solution for a task are also likely to find easier or partial solutions that make some measurable progress. We would love to see future work investigate how human-AI collaborations might score on tasks like these, and hope that RE-Bench can facilitate this.
3) We aimed to be principled in our choice of scaffolds, with Modular representing a simple proven baseline, and AIDE representing a more complex open-source scaffold that achieved top scores on the closest existing benchmark (MLE-bench). We certainly agree that scaffolding is an unsolved problem in AI evaluations, and we discuss some of these limitations in Section 5.3. We are excited for future work on better understanding their effects, and **we hope that RE-Bench can serve as a valuable testbed for other researchers!**

4) While we agree that these results would be interesting, we note that the newer models were released less than 4 months before the ICML submission deadline, which falls under "concurrent work" according to the ICML reviewer guidelines. **Therefore, we think that these evaluations would be out-of-scope for our work.** However, we certainly are excited for future work that evaluates these, and newer, models on RE-Bench!

Here are our responses to the reviewer's questions. Note, due to the character limit, we've had to be brief in our response.

1) Based on experimentation and qualitative judgements, we expect models substantially older than Claude 3.5 Sonnet (Old), which is the oldest model evaluated in our work, to perform very poorly on RE-Bench.

2) The average token cost for agent runs is \$123, while we paid human experts \$1855 on average; GPU costs vary, but H100s generally cost around \$2/hr (and tasks use 0-6 GPUs). However, we want to note that these environments could be run on less expensive GPUs (like A100s), but the collected run data would not be comparable. Close to no tweaking of the environments, and often none to the scaffolds either, is needed to run newer models.

3) RE-Bench has been designed to measure capabilities research primarily.
4) We believe our professional-network experts (scoring 0.98 on average) are very strong, with over 5 years of experience in an area highly relevant to the task they are baselining, or recent experience at frontier ML research organizations. More information can be found in Section 2.1 or Appendix A!

5) We suspect this result is due to noise.

6) Qualitatively, Claude 3.5 Sonnet (New), for example, seems to be better at tool use (invoking the right tools correctly, and interpreting their results) than o1-preview, which affects how well models work with different scaffolds.

7) Figure 2 plots **time budget** on the x-axis, i.e. increasing amounts of the total time available to agents, and each point is the average performance when using our best-observed way of allocating that time for that AI agent, which often involves doing best-of-k (BoK) over many short runs. Whereas in Figure 4, we are plotting wall-clock time on the x-axis for a single run. Scores increase more slowly in this setting, because models often get stuck with bad solutions or incorrect assumptions over time. Frequent resetting seems to increase solution diversity and helps them continue to make progress.

On the suggested improvements for the writing and the typos, we'll address these in our updated draft of the paper!

---

Rebuttal Comment 1.1: Comment: Thank you for the response! I have read this response, all other reviews, and your responses to the other reviews. I have not yet read any reviewer responses to your responses since they don't yet exist. I think your answers are reasonable and I keep my score of 4. I think this is a good paper.
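The best-of-k aggregation discussed in this thread can be estimated from a pool of observed run scores using the order-statistics form familiar from pass@k-style metrics. Whether the paper computes BoK exactly this way is an assumption on our part, and the scores in the example are invented for illustration.

```python
import math

def expected_best_of_k(scores, k):
    """Unbiased estimate of E[max score over k runs], computed from n >= k
    observed run scores by weighting each score with the probability that
    it is the maximum of k draws without replacement."""
    s = sorted(scores)
    n = len(s)
    if not 1 <= k <= n:
        raise ValueError("need 1 <= k <= number of observed runs")
    denom = math.comb(n, k)
    # The i-th smallest score is the max of k draws iff the other k-1 draws
    # come from the i-1 smaller scores: C(i-1, k-1) / C(n, k).
    return sum(math.comb(i - 1, k - 1) / denom * v
               for i, v in enumerate(s, start=1))
```

For instance, with observed run scores `[1, 2, 3]` and `k=2`, this averages the max over the three unordered pairs: (2 + 3 + 3) / 3 = 8/3.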
Summary: This paper contributes a new LLM (agent) benchmark, **RE-Bench**, consisting of 7 ML research engineering tasks for evaluating whether AI agents can autonomously perform AI R&D. A human study is also conducted, and results are analyzed.

## update after rebuttal

I thank the authors for their rebuttal. I maintain my score after reading all the other review comments.

Claims And Evidence:
* "Seven novel, hand-crafted evaluation environments covering realistic ML research tasks." Yes, this is substantiated.
* "Data from 71 attempts by 61 ML experts...results"; yes.
* "Qualitative analysis" was also substantiated by the evidence in the paper.

Methods And Evaluation Criteria:
* Benchmark Construction: 7 custom tasks, each with a scoring function, a baseline, and a strong reference solution. Score is normalized between 0 (baseline) and 1 (reference).
* Human Comparison: 61 experts, each given 8 hours, same compute environment.
* AI Agents: Tested on the same tasks with up to 8 hours, plus best-of-k variants for shorter runs.
* Criteria: Normalized scores, time-based performance, best-of-k sampling.

Theoretical Claims: Not applicable, as there are no new theoretical claims.

Experimental Designs Or Analyses:
* Tasks validated with pilot human tests to ensure feasibility.
* Multiple agent scaffolds and time horizons thoroughly compared.
* Agents repeatedly query the scoring function, letting them brute-force improvements quickly.
* Analyses are careful, e.g. acknowledging noise in QA fine-tuning.
* Overall design is robust and fair, though limited by having only 7 tasks.

Supplementary Material: Skimmed sections.

Relation To Broader Scientific Literature: Positions RE-Bench relative to prior coding/ML agent benchmarks (MLE-bench, GAIA, etc.), highlighting its novelty in offering long-horizon tasks and direct human comparisons. Also references frontier AI policy frameworks. Good coverage of existing agent scaffolds.
Essential References Not Discussed: No essential references not discussed.

Other Strengths And Weaknesses:
* Clarity: The paper is well written and clear to read.
* Originality: The proposed benchmark is novel and original.
* Significance: The benchmark focuses on solving a problem valued by the community.

Other Comments Or Suggestions:
* Clarify if any rule-breaking "cheating" solutions were excluded from final results.
* Discuss how specialized domain experts vs. generalized experts might affect human baselines.

Questions For Authors:
* How were "cheating" or environment-breaking agent solutions handled in scoring?
* Did any participants use external LLMs while solving, and how did that affect results?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
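The 0-to-1 score normalization described in the review (0 = provided starting solution, 1 = strong reference solution) is, in its simplest linear form, just the following. The benchmark's actual per-task score functions may apply task-specific transforms first; this sketch only shows the linear normalization named in the review.

```python
def normalized_score(raw, baseline, reference):
    """Normalize a raw task score so the starting solution maps to 0 and
    the reference solution maps to 1. Scores above 1 beat the reference;
    negative scores regress below the starting solution."""
    if reference == baseline:
        raise ValueError("reference and baseline scores must differ")
    return (raw - baseline) / (reference - baseline)
```

Note the formula also handles lower-is-better metrics: for a task where the starting solution's runtime is 120 and the reference runtime is 60, a raw runtime of 90 normalizes to 0.5.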
Rebuttal 1: Rebuttal: We are pleased to hear that the reviewer found the paper well-written and clear to read, RE-Bench novel and original, and the contributions significant. We address the reviewer's questions below, and we'd be happy to provide any further clarifications.

Q1: How were "cheating" or environment-breaking agent solutions handled in scoring?

Any runs that we inspected that had evidence of cheating were excluded from the results entirely. For agent solutions, we carefully inspected the 2 best-performing runs for all tasks, and we inspected many more of the top-performing runs on "Restricted Architecture MLM" and "Optimize LLM Foundry". These two tasks often had cheating because agents would often forget or not take into account the restrictions. When running these experiments and developing the environments, we have not seen cheating or environment-breaking agent solutions for any other tasks. For human runs, we manually inspected score logs and submissions for any signs of cheating or environment-breaking solutions.

Q2: Did any participants use external LLMs while solving, and how did that affect results?

Participants were allowed to use the internet, external LLMs, and other tools in order to solve the task. We wanted to compare AI agents against humans with access to their preferred development environment and tools, as that would be more representative of how frontier research is conducted and provide the strongest baseline. We expect most participants to have used LLM-based coding tools like Cursor, or web-based LLM interfaces. We would be excited to see future work explore how human performance on research tasks varies with access to different types of AI assistance. Additional discussion of both questions can be found in Appendix A!
Summary: In this work, the authors propose RE-Bench, which is designed to assess the capabilities of AI agents for AI research and development, especially in comparison with human experts. They define 7 ML engineering environments with scoring functions and evaluate human experts and AI agents on those under the same time budget. Under similar constraints, with access to the scoring functions, they find that given a small amount of time, AI agents can have an edge over the human experts, which is significantly flipped given more time. They suggest that this can be a reasonable benchmark for evaluating and developing AI agents for research.

Claims And Evidence:
- I have some concerns about the claim of "open-ended ML research engineering." Please refer to the Methods And Evaluation Criteria section for details.

Methods And Evaluation Criteria:
- Using time as the universal budget for both humans and AI agents is an interesting decision, because one of the major strengths of AI agents over humans is their computing power.
- My primary concern about this work is the fact that the authors allow invoking the scoring function for evaluation without any limits ("The scoring function, which defines the goal of the environment, can be run by the agent at any time."). This could imply two things:
  - (a) Viewing this through the lens of sequential decision-making, unlimited access to the scoring function may mean access to the ground-truth value function (or some function that provides analogous information). Although the search space is still quite huge, this decision may give too much advantage to fast iterators, which is usually not the case with challenging real-world AI R&D tasks.
  - (b) In many real-world AI *research* tasks, achieving some degree of generalizability is required, because research is usually not targeted at a specific narrow downstream task. For instance, researchers are not supposed to check the test performance until the research is finished.
Based on this, I have some concerns about claiming that the environments are for evaluating the capabilities of agents for open-ended ML "research" engineering.
- On the other hand, I think the suggested environments can be reasonable, non-trivial testbeds to assess the engineering or development capabilities of AI agents.

Theoretical Claims: This submission does not make notable theoretical claims.

Experimental Designs Or Analyses:
- The experiments are rigorously done with well-designed setups. For instance, the use of Vivaria for providing VM environments with GPUs can be an important factor for the reproducibility and reliability of the presented results.
- The authors provide many technical details and decisions as well as empirical analyses. These can provide useful insights about the benchmark, and especially about the comparison of human experts and AI agents.

Supplementary Material: I read the human evaluation details, agent evaluation details, environment information, and some of the example task instructions and solutions.

Relation To Broader Scientific Literature: AI-based AI research agents are getting attention these days. The topic and the problem that this work tackles are relevant to the broader scientific literature.

Essential References Not Discussed: Excluding concurrent work, I believe this work fairly cites relevant work.

Other Strengths And Weaknesses:
- The quality of the presentation of this work and the manuscript is high. It is easy to follow thanks to the organized information. Also, it provides many low-level details and analyses, which can be useful resources for readers and other researchers in the field.

Other Comments Or Suggestions: N/A

Questions For Authors:
- The use of the time budget is one possible choice of "x-axis" for the evaluations and comparisons. Have you considered other choices, such as the number of score evaluations?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their time, attention, and thoughtful feedback. We are glad that the reviewer finds the RE-Bench tasks to be nontrivial assessments of research engineering. We appreciate that the reviewer found the paper and presentation of the work to be high quality and the experimental results to be rigorous, well-designed, and insightful.

Q1: My primary concern about this work is the [... question omitted due to character limit ...] I have some concern about claiming that the environments are for evaluating the capabilities of agents for open-ended ML "research" engineering.

Thank you for bringing this up! We offer several points in response to this concern:

* The 7 tasks vary significantly in how easily and quickly the ground-truth score can be checked, even though they all allow an unlimited number of invocations of the score command. In environments like "Restricted Architecture MLM" and "Fix Embedding", **good solutions require long training runs before they can be meaningfully scored (sometimes taking many hours)**, so in practice agents/humans only have one or a few attempts to train and score a final solution. And in the **"Scaling Law Experiment" environment, the agent can never see the ground-truth score of its solutions.** In both cases agents can check the score performance of smaller models or shorter training runs frequently, but then have to account for generalization.
* For some tasks in real AI R&D work, the **ground-truth score function is actually quite easy to access**; for example, the runtime is very accessible when trying to optimize code bottlenecks for large training runs. Our task "Optimize a Kernel", where the goal is to make a GPU kernel run as fast as possible, is a realistic example where it's very natural to have quick access to the actual score. Therefore, we think it is important to include such environments in our benchmark.
* The reason we give ground-truth feedback in so many of the tasks is to avoid reasonable misunderstandings about how the score will be measured, what kind of generalization is intended, or what kinds of solutions are allowed, which we have previously found can happen very easily even for human experts and can make task results uninformative.
* Agents being fast iterators is indeed a real phenomenon. We discuss in Section 3.4 that, on average, agents run the score function between 25.3 and 36.8 times per hour, compared to 3.4 times for humans. This seems to help significantly on tasks like "Optimize a Kernel" where rapid iteration against ground truth is possible, but it is not clear that it provides any real advantages on tasks like "Fix Embedding" where proper score assessment requires long training runs and exercising judgment.
* Lastly, we acknowledge that this is indeed a consideration, and we offer a discussion of it in Section 5.3.

Q2: The use of the time budget can be one of the choices for the "x-axis" for the evaluations and comparisons. Have you considered other choices, such as the number of score evaluations?

Thank you for this question! Here are some points we hope address the reviewer's question:

* **Indeed, we include a comparison between humans and agents with non-GPU cost (i.e. pay for humans and token costs for AIs) on the x-axis as Figure 8**.
* Due to practical constraints, human baselines had to be conducted with a time limit of 8 hours (see Section 2.1). We offer additional discussion about how this affects our results in Section 5.2.
* We also wish to highlight that **RE-Bench is agnostic to what limit is used for agents or humans.** This flexibility means that RE-Bench can easily be used to explore different comparative settings in future work.
* Additionally, agents and humans were asked to optimize performance relative to the time they used, so using time as the x-axis is more representative of their best performances.
Properly investigating how score relates to the number of score evaluations would require rerunning human and agent experiments with instructions to minimize unnecessary score invocations.

* Lastly, while not an x-axis, we also include a discussion of the number of score evaluations in Section 3.4.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors providing a detailed response to my review. While I have some remaining concerns regarding the claim of "open-ended ML research" engineering, the paper and author response discuss that aspect to a certain degree, and thus I am raising my score to 4.
A First-order Generative Bilevel Optimization Framework for Diffusion Models
Accept (poster)
Summary: The authors have proposed a bilevel optimization framework tailored for diffusion models, specifically addressing two scenarios.

**Fine-tuning a pre-trained diffusion model (via an inference-only solver):** To fine-tune pre-trained diffusion models to maximize task-specific rewards while preserving aesthetic realism, the authors pose a bilevel optimization problem:
- Upper-level problem: Selects optimal hyperparameters, such as the entropy regularization strength λ, that balance reward maximization and realism.
- Lower-level problem: Adjusts the generated data distribution to maximize a reward function with entropy regularization, ensuring closeness to the pre-trained distribution.

This method avoids expensive backpropagation through diffusion steps by using guided sampling and closed-form gradient estimation.

**Noise schedule optimization (training from scratch):** The authors optimize the noise schedules used when training diffusion models from scratch, and the bilevel problem is defined as:
- Upper-level problem: Optimizes parameters controlling the noise schedule to minimize metrics like the FID score of the generated images.
- Lower-level problem: Learns the parameters of a score function that approximates gradients of log-likelihoods of noisy data distributions.

To efficiently solve this nested structure without differentiating through multiple sampling steps (which would be computationally intensive), the authors use:
- Reparameterization of noise schedules (using cosine or sigmoid functions with only four parameters).
- Zeroth-order gradient estimation, allowing gradient approximation without explicit backpropagation through sampling trajectories.

The authors convert the nested bilevel optimization problems into single-level penalty problems solvable via first-order methods, providing theoretical guarantees under strong convexity assumptions.

Claims And Evidence: Proposition 1 derives a closed-form gradient estimator for the entropy regularization strength (λ) using pre-trained data samples, avoiding backpropagation through diffusion steps. Theorem 1 establishes convergence guarantees under strong convexity assumptions (Assumption 1). The paper reformulates bilevel problems into single-level penalty objectives (Eq. 2), enabling gradient updates via inference-only guided sampling (Alg. 5).

Methods And Evaluation Criteria:

Methods: The authors have proposed a bilevel optimization method for the guidance of diffusion models, to maximize the reward function while maintaining the realism of the generated images. They propose this method for two scenarios: (i) inference-based fine-tuning of pretrained models, and (ii) noise schedule optimization when training from scratch.

Evaluation Criteria: The authors use FID score, CLIP score, IS score, and time (in seconds) to discuss the effectiveness of their proposed method.

Theoretical Claims: Theorem 1 establishes convergence guarantees for the proposed bilevel optimization framework under the following conditions:
- Strong convexity: The lower-level objective is strongly convex.
- Smoothness: f(x,y) and g(x,y) are jointly smooth over (x,y) with constants l_{f,1} and l_{g,1}.
- Lipschitz continuity: f(x,⋅) is l_{f,0}-Lipschitz, and g(x,y) has an l_{g,2}-Lipschitz Hessian.

Under Assumption 1, the bilevel algorithm ensures a strict descent in the upper-level objective F(x).
This theorem bridges theory and practice, ensuring that the framework's hyperparameter updates (e.g., the entropy strength λ or the noise schedule parameters) provably guide diffusion models toward better performance under realistic assumptions.

Experimental Designs Or Analyses: Experimental Setup: For reward fine-tuning of pre-trained models, the authors use the Stable Diffusion v1.5 model as their pre-trained model and employ a ResNet-18 architecture (trained on the ImageNet dataset) as the synthetic (lower-level) reward model. The bilevel method achieved an 11.76% improvement in the FID score and an 8.32% improvement in the CLIP score over the best-performing weighted-sum method. For noise schedule optimization, the authors train a U-Net model on MNIST from scratch and use cosine/sigmoid schedules with 4 parameters. The authors claim their bilevel method achieved a 30% lower FID than default DDIM with only 2.5× the training time.

Supplementary Material: I have reviewed Appendices A, B, and C and Algorithms 5 and 6.

Relation To Broader Scientific Literature: The authors claim this is the first work relating bilevel optimization to diffusion models. Their work fits into the broader space of AI alignment and reward-based optimization of generative models, and is foundational toward understanding alignment from a theoretical perspective. The authors discuss reward alignment from a bilevel optimization perspective. There are a few recent papers on this topic, such as:
1. Implicit Diffusion: Efficient Optimization through Stochastic Sampling
2. SPARKLE: A Unified Single-Loop Primal-Dual Framework for Decentralized Bilevel Optimization
3. Bi-level Guided Diffusion Models for Zero-Shot Medical Imaging Inverse Problems
Apart from that, in a broader sense, since the method also targets inference-time alignment, the authors can comment on other test-time alignment and reward-based methods such as:
1. Aligning Text-to-Image Diffusion Models with Reward Backpropagation
2. Aligning Diffusion Models with Noise-Conditioned Perception
3. Diffusion Model Alignment Using Direct Preference Optimization

Essential References Not Discussed: NA

Other Strengths And Weaknesses: **Strengths**
Theoretical Innovation:
- Closed-form gradient estimation: Proposition 1 enables direct computation of gradients for the entropy regularization strength (λ) using pre-trained samples, eliminating backpropagation through diffusion steps.
- Convergence guarantees: Theorem 1 establishes convergence under strong convexity assumptions, matching standard bilevel optimization rates.
Algorithmic Design:
- Inference-only fine-tuning: Avoids backpropagation through sampling trajectories by using guided sampling (Algorithm 5), reducing memory costs.
- Parameterized noise schedules: Reduces noise schedule optimization to 4 parameters (e.g., cosine/sigmoid functions) instead of tuning per-step values.
- Zeroth-order (ZO) gradients: Enables gradient estimation for noise schedules without differentiating through sampling steps (Equation 14).
**Weaknesses**
- Baseline comparisons: The authors should compare their bilevel optimization approach with other reward-model-based approaches (mentioned above) to see how well their method performs.
- Reproducibility: The authors should release their code and experimental setup to support their work.

Other Comments Or Suggestions: The images provided by the authors are a bit difficult to assess in terms of their aesthetics. If the authors could put one image per prompt rather than a collage, it would help to differentiate their method's effectiveness.

Questions For Authors:
- Is there a relaxation of the strongly convex setup that might be applicable here? A proof under such a relaxation would strengthen the method.
- The memory cost comparison could be further explained, as it is mentioned in a few places.
- Can the authors comment further on other related works (as mentioned above), in terms of aesthetics, computation, methods, and comparison metrics? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating the theoretical innovation and our algorithm design. Our response to your comments follows. **Q1. Baseline comparison.** Thank you for your question. We have added numerical comparisons using different reward functions during the rebuttal period; please see Table R1 and Figure R1 at the anonymous link: https://anonymous.4open.science/r/bilevel-diffusion-11A1/bilevel_diffusion_rebuttal.pdf. We test the performance of our method for another widely used lower-level reward function, HPSv2. The bilevel approach also outperforms other baselines in terms of image quality at comparable time complexity, which showcases the robustness of our approach with respect to different reward functions. Due to time constraints, we could not include all the baselines you mentioned, as they all correspond to the lower-level reward fine-tuning task and require additional HPO on top of that. We believe these methods, though differing in their lower-level fine-tuning strategies, could similarly benefit from an upper-level HPO. We will discuss these points further in our revised manuscript. **Q2. Reproducibility.** Thank you for your suggestion. We will release the code with the final version. **Q3. Presentation of images.** Thank you for your suggestion. We will change it accordingly. **Q4. Relaxation of the strong convexity assumption.** Thank you for your question. Yes, it is possible to relax the strong convexity assumption to the so-called Polyak-Łojasiewicz condition following (Kwon et al., 2024; Shen et al., 2024). This condition covers nonconvex objectives and is satisfied by the loss of overparameterized neural networks [R5].
> [R5] Loss landscapes and optimization in over-parameterized non-linear systems and neural networks. C. Liu, et al. 2022.
**Q5. Memory cost comparison.** For the first application of reward fine-tuning, since Algorithm 2 separates the hyperparameter optimization stage and the sampling stages, it does not introduce additional memory cost.
For the second application of noise scheduling, we add Table R2 in the attached PDF to compare the memory usage of bilevel HPO and other approaches. By utilizing ZO and the noise scheduler parameterization, the memory overhead of the bilevel method is not severe. **Q6. Comparison with related works.** Thank you for providing these references. Most of the related works concern the lower-level task of the first application, the diffusion model fine-tuning stage, including [R6-R11]. They belong to the related works in the "fine-tuning diffusion models" section, and we adopt the guidance-based fine-tuning method for the lower-level task. All of the methods in [R6-R11] can further benefit from tuning the KL regularization using our framework. We will add a discussion in the paper.
> [R6] Implicit Diffusion: Efficient optimization through stochastic sampling
> [R7] SPARKLE: A Unified Single-Loop Primal-Dual Framework for Decentralized Bilevel Optimization
> [R8] Bi-level Guided Diffusion Models for Zero-Shot Medical Imaging Inverse Problems
> [R9] Aligning Text-to-Image Diffusion Models with Reward Backpropagation
> [R10] Aligning Diffusion Models with Noise-Conditioned Perception
> [R11] Diffusion Model Alignment Using Direct Preference Optimization
--- Rebuttal Comment 1.1: Comment: Thanks for the clarification, this addresses my remaining concerns. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our work and response, and for providing constructive suggestions.
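As an aside for readers, the single-level penalty reformulation summarized in this review thread (the nested bilevel problem converted into one penalized objective solvable by first-order updates) can be illustrated on a toy problem. The quadratic objectives, penalty weight, and step size below are illustrative assumptions, not the paper's actual setup.

```python
# Toy sketch of a single-level penalty reformulation of a bilevel problem:
#   upper level: minimize F(x) = f(x, y*(x)) with f(x, y) = (y - 1)^2
#   lower level: y*(x) = argmin_y g(x, y),      g(x, y) = (y - x)^2
# Since min_y g(x, y) = 0 for this g, the penalized objective is
#   L(x, y) = f(x, y) + sigma * g(x, y),
# and plain gradient descent on (x, y) recovers the bilevel solution x* = y* = 1.
sigma, lr = 10.0, 0.02   # illustrative penalty weight and step size
x, y = 3.0, -2.0
for _ in range(3000):
    grad_x = -2.0 * sigma * (y - x)                      # dL/dx
    grad_y = 2.0 * (y - 1.0) + 2.0 * sigma * (y - x)     # dL/dy
    x, y = x - lr * grad_x, y - lr * grad_y
```

For this toy instance the penalized first-order iteration converges to (x, y) = (1, 1), matching the bilevel optimum without any nested solve.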
Summary: This paper explores the application of bilevel optimization in diffusion models, focusing on two key applications. The first optimizes the trade-off parameter that balances the reward and proximity to the pre-trained distribution during fine-tuning. The second optimizes the noise schedule in diffusion models. To enhance scalability, the paper proposes a first-order optimization framework specifically designed for diffusion models. Experimental results demonstrate that the proposed method significantly reduces hyperparameter optimization time compared to grid search, random search, and Bayesian optimization. Claims And Evidence: The proposed method is theoretically well-founded and demonstrates strong empirical performance. Methods And Evaluation Criteria: The proposed method adapts the existing fully first-order method to diffusion models, which makes sense. Theoretical Claims: I checked the proofs of Proposition 1 and Theorem 1. Experimental Designs Or Analyses: Yes. Supplementary Material: I reviewed Sections A, B, C, and D in the supplementary material. Relation To Broader Scientific Literature: The proposed method builds on the fully first-order approach from Kwon et al. (2023) and introduces improvements to enhance its scalability for diffusion models Essential References Not Discussed: No Other Strengths And Weaknesses: The proposed method is theoretically grounded and specifically designed for diffusion models, demonstrating strong empirical performance. Other Comments Or Suggestions: In Eq. 1, should the optimization variable in the upper level be only x? Questions For Authors: Could the method be extended to the setting where the upper-level objective is not differentiable? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating the theoretical guarantees for our algorithm and its empirical performance. Our response to your comments follows. **Q1. Optimization only over $x$ in equation 1?** Thank you for your question. In equation 1, we state the general bilevel HPO formulation where the lower-level objective is not necessarily strongly convex, so that $\mathcal{S}(x)$ may contain multiple solutions. Therefore, we also need to optimize over $y$ to select one solution. To eliminate any confusion, we will revise our formulation throughout to assume a strongly convex lower-level objective so that optimizing over $x$ alone is enough. **Q2. Applicability of the method to non-differentiable objectives.** Thank you for your question. Our approach in the second application can extend to non-differentiable upper-level metrics, as ZO estimation is compatible with non-differentiable objectives. For the first application, if the reward function is non-differentiable, we would similarly employ a ZO estimator for its gradient; see [R4]. > [R4] An Algorithm with Optimal Dimension-Dependence for Zero-Order Nonsmooth Nonconvex Stochastic Optimization. G. Kornowski, O. Shamir. JMLR, 2024.
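The ZO estimation mentioned in Q2 can be sketched with a standard two-point estimator that uses only function evaluations, so it applies even when the objective is non-differentiable. The test function, smoothing radius, and number of directions below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def zo_grad(f, x, mu=1e-4, n_dirs=20000, seed=0):
    """Two-point zeroth-order gradient estimate of f at x.

    Averages (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over random Gaussian
    directions u. Only function evaluations are needed, so f may be
    non-differentiable (e.g. an FID-style upper-level metric).
    """
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((n_dirs, x.size))
    fd = np.array([(f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) for u in U])
    # E[(grad . u) u] = grad for standard Gaussian u, so average fd * u.
    return (fd[:, None] * U).mean(axis=0)

# Sanity check on f(v) = ||v||^2, whose true gradient is 2v.
x = np.array([1.0, -2.0, 0.5])
g = zo_grad(lambda v: float(v @ v), x)
```

With enough random directions the estimate concentrates around the true gradient 2x, illustrating why ZO estimation can stand in for backpropagation through sampling trajectories.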
Summary: This paper introduces a practical, first-order bilevel framework for diffusion models, outperforming standard methods in fine-tuning and training scenarios. The proposed method eliminates the high dimensionality and sampling costs in traditional methods. Claims And Evidence: The claims are supported by experimental results and theoretical convergence guarantees. Methods And Evaluation Criteria: Yes, the proposed method and evaluation criteria make sense. Theoretical Claims: The main proof is provided in D.3 and it is correct. Experimental Designs Or Analyses: Since the main objective of this paper is to address practical challenges in diffusion models, the numerical experiments presented are insufficiently representative. Specifically, Section 6.1 optimizes only one hyperparameter, resulting in limited improvement over naive search methods. Additionally, the experiments in Section 6.2 rely solely on the MNIST dataset, which might be too simple relative to large-scale diffusion model applications and thus does not adequately reflect the performance of modern diffusion frameworks. More large-scale datasets could be added to make the results more convincing. Supplementary Material: Yes, experiments and proofs. Relation To Broader Scientific Literature: The main contribution of this paper is framing diffusion hyperparameter tuning as a bilevel optimization problem. On the positive side, this formulation appears novel in the diffusion context. However, the motivation is not that exciting because hyperparameter tuning is already a standard application of bilevel optimization. A more promising direction may involve using a bilevel optimization approach to address fundamental diffusion challenges, such as improving sampling efficiency. Essential References Not Discussed: References are good to me. Other Strengths And Weaknesses: The application of bilevel optimization to tuning hyperparameters in diffusion models is novel and interesting.
From the bilevel optimization side, the novelty is not very significant. The paper could be strengthened if more comprehensive and large-scale experiments were done. Other Comments Or Suggestions: Line 128 left, $S_\gamma^*(x)$ is not defined. Line 201 right, $y^*$ and $z^*$ are mismatched. Line 437 right, the quotation marks need some adjustments. Questions For Authors: Although I appreciate the novel application of gradient-based hyperparameter tuning for diffusion models, I still want to understand, from the optimization perspective, what new challenges diffusion models introduce to algorithmic development in bilevel optimization. In other words, what is the main technical or algorithmic novelty on the bilevel optimization side? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our novelty. We hope our response to your comments below can resolve your minor concerns. **Q1. Insufficient numerical experiments.** - **One hyperparameter in the fine-tuning diffusion model experiment.** Although KL regularization is just one hyperparameter, carefully tuning it is essential. The appropriate KL strength $\lambda$ prevents reward over-optimization on downstream tasks by keeping the model close to the pre-trained distribution, while still allowing for the necessary variability to improve the reward (see Uehara et al., 2024; Fan et al., 2024, and Figure 2). We also consider tuning multiple hyperparameters in the second experiment on noise scheduling for training diffusion models. - **Noise scheduling task only on the MNIST dataset.** The noise scheduling problem we consider arises during the (pre-)training stage of diffusion models, which is particularly expensive compared to fine-tuning. Notably, the cost of training a diffusion model on MNIST is already comparable to the cost of fine-tuning on ImageNet (see Tables 1 and 2). Moreover, training requires tuning multiple hyperparameters in the noise scheduler parameterization, further increasing the computational burden (see the grid search method in Table 2). Our goal for the second application is to propose a new automatic HPO framework for training diffusion models via bilevel optimization, so we demonstrate our method using the MNIST dataset in this version. More datasets will be investigated in future work. - **More comprehensive experiments.** During the rebuttal period, we added more comparisons on HPO for the fine-tuning diffusion model task. We test the performance of our method for another widely used lower-level reward function, HPSv2. See Table R1 and Figure R1 at the anonymous link: https://anonymous.4open.science/r/bilevel-diffusion-11A1/bilevel_diffusion_rebuttal.pdf.
Bilevel approaches also outperform other baselines in terms of image quality at comparable time complexity, showcasing the robustness of our approach with respect to different reward functions. **Q2. New challenges and novelty from the optimization perspective.** Thank you for your question. As we highlighted at the end of Section 1, there is *sufficient novelty* in the context of bilevel optimization as well. The *first challenge* is that we cannot directly optimize the distribution itself but can only work with samples. In HPO for fine-tuning a diffusion model, we use guided backward sampling to generate samples approximately from the desired distribution. Thanks to (Guo et al., 2024), we bridge the gap between guided sample-generation guarantees and optimization guarantees for the underlying probability distribution in this setting. In HPO for training a diffusion model, we parameterize the noise scheduler and noise distribution via cosine/sigmoid functions and a score network, respectively, and optimize their parameters instead. The *second challenge* is the numerical feasibility and computational overhead. For Application 1, we derived a backward-process-free approach to estimate the upper-level gradient (Proposition 1), reducing complexity. For Application 2, we employ ZO estimation to avoid backpropagation, which is costly in both computation and memory. Finally, we validate the assumption of strong convexity in probability space for diffusion models, which is discussed in "implications for generative bilevel applications" in Section 5. **Q3. Hyperparameter tuning is a standard application of bilevel optimization.** Thank you for your question. Note that although prior work has explored HPO via bilevel optimization, most existing methods rely on implicit gradient or unrolling differentiation approaches that entail costly second-order computations. In contrast, we leveraged a fully first-order bilevel method, *which is new in HPO*.
Moreover, the techniques in existing works do not readily extend to the infinite-dimensional probability space of diffusion models. The new challenges we highlight in Q2 are fundamental when applying bilevel HPO to diffusion models. **Q4. Applicability of noise schedule tuning to faster sampling with a diffusion model.** We are happy to see our work may stimulate a promising direction! This is indeed an interesting future direction for us. We note the emerging line of work on noise optimization for faster fine-tuning (e.g., Tang et al., 2024), but the stages they target differ from ours. Those methods focus on reward fine-tuning using a pre-trained model, so it is essentially a single-level reward maximization task, similar to a non-HPO version of our first application on reward fine-tuning, whereas we target automatic HPO for the fundamental diffusion model training stage. Moreover, as our method explores a better choice of noise scheduler at each iteration, it can potentially reduce the overall sample complexity needed to achieve the target image quality in diffusion model training.
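The low-dimensional cosine noise-schedule parameterization referenced in this thread can be illustrated roughly as follows. The specific four parameters chosen here (offset, start, end, power) are an assumption for illustration; the paper's exact parameterization may differ.

```python
import numpy as np

def cosine_alpha_bar(t, s=0.008, start=0.0, end=1.0, tau=1.0):
    """Illustrative four-parameter cosine schedule for alpha_bar(t), t in [0, 1].

    s:          small offset, as in the Nichol & Dhariwal (2021) cosine schedule
    start, end: portion of the cosine quarter-period actually used
    tau:        power that sharpens or flattens the decay
    This particular choice of four parameters is an assumption for
    illustration, not the paper's stated parameterization.
    """
    u = start + t * (end - start)
    f = np.cos((u + s) / (1.0 + s) * np.pi / 2.0) ** 2
    f0 = np.cos((start + s) / (1.0 + s) * np.pi / 2.0) ** 2
    return (f / f0) ** tau

t = np.linspace(0.0, 1.0, 101)
ab = cosine_alpha_bar(t)
```

A bilevel HPO outer loop would then search over these few scalar parameters (rather than per-step noise values), with the score-matching training as the lower-level problem.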
Summary: The paper explores the problem of bilevel optimization with diffusion models - a hierarchical framework consisting of higher- and lower-level objectives that are jointly optimized. The authors frame the following two problems as bilevel optimization: 1. KL-regularized reward maximization as the lower-level objective for diffusion model finetuning, and the higher-level objective optimizing the KL weight $\lambda$ using CLIP as the reward. 2. Tuning the noising schedule of a diffusion model in the upper level and the score matching loss in the lower level. For the reward-guided finetuning task, the authors use a color/vibrancy reward for the lower-level objective and the CLIP score for the higher-level objective, and demonstrate improvement over other search methods such as grid search and Bayes opt. For the second task, the authors finetune the noise schedule of an MNIST generative model using DDIM inference. Claims And Evidence: The paper claims to propose an efficient framework to perform bilevel optimization with diffusion models. I think the claims for the most part are justified by the experiments, but I have not gone through the implementation details thoroughly enough to know if the methods being compared against are fairly tuned. Methods And Evaluation Criteria: I am not fully convinced that the two tasks the paper explores are directly useful themselves. Usually in previous work, entropy-regularized reward-guided sampling of diffusion models (the lower-level problem) is investigated in isolation. I can see the utility of tuning the KL weight, but the complexity added with bilevel optimization doesn't seem useful in practice. However, it is a reasonable experiment to demonstrate the method in the context of this paper. The noise schedule optimization with score matching as the lower-level loss is also a somewhat confusing experiment, and only demonstrating qualitative results on MNIST is a weak result.
Theoretical Claims: I did not thoroughly check the theoretical claims and math, however I did notice a couple of potential issues: 1. The gradient in Proposition 1 is not correctly estimated using the MC estimator in Appendix E.2. The expectation is inside the log, so taking the MC average inside the log would provide a biased estimator. However, Section 4.1 seems to imply we can tractably estimate this gradient. 2. On page 4, final paragraph (Section 4.1), it is written that Algorithm 5 samples from the optimal entropy-guided distribution. But this is also an intractable task, and I believe the sampling process would only provide approximate guidance. Experimental Designs Or Analyses: I did not notice any issues with the experimental design. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: I am not aware of previous works that are directly related to the proposed bilevel optimization framework. The noise scheduling task could be applicable to faster sampling with a diffusion model. The bilevel reward finetuning task is potentially more relevant in my opinion, and could be used alongside other strategies from the diffusion guidance literature [1] for solving the lower-level optimization task. [1] Inference-Time Alignment in Diffusion Models with Reward-Guided Generation: Tutorial and Review, https://arxiv.org/abs/2501.09685 Essential References Not Discussed: No essential references are missing to my knowledge. Other Strengths And Weaknesses: ### Weaknesses 1. I found the paper quite confusing to read throughout. After reading the paper, I still do not fully understand the motivation. Other Comments Or Suggestions: NA Questions For Authors: 1. Could the authors clarify the two issues I brought up regarding theoretical claims? 2. Is there any reason the noise scheduling task was only done for MNIST sampling? It would be more convincing if done with a more difficult dataset. Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our topic. We hope our response to your comments below can resolve your minor concerns. **Q1. Motivation of the two hyperparameter optimization (HPO) problems.** In the first application, the primary concern is whether adding additional computational cost for HPO is worthwhile, whereas in the second application, the concern is its rationale. - **First application - Reward fine-tuning.** Our research is complementary to entropy-regularized reward-guided diffusion models since, no matter how the guidance terms are chosen, the tuning of KL regularization is inevitable. Conventionally, tuning $\lambda$ via cross-validation (e.g. grid search) also incurs computational overhead due to the lack of universal guidelines across datasets and prompts (Uehara et al., 2024; Fan et al., 2024). In contrast, a reward-guided diffusion model combined with our bilevel-based HPO approach yields better FID and CLIP scores while preserving time complexity (see Table 1 and the 3rd paragraph in Section 6.1). Furthermore, without bilevel optimization, a common strategy to use CLIP for enhancing realism is to include it in a weighted sum alongside the reward guidance, which requires the costly CLIP gradient. In contrast, our bilevel method, thanks to Prop. 1, is CLIP gradient-free, which cuts down overall time costs; see Table 1. - **Second application - Noise scheduling.** The noise scheduler, i.e., the variance of the added noise, controls the generation quality in diffusion models. If the variance is too large, the forward process quickly becomes noise and fails to learn meaningful representations for backward generation; if too small, it never fully reaches Gaussian noise. Prior work has highlighted the need to tune this noise scheduler to balance this trade-off (Nichol & Dhariwal, 2021; Lin et al., 2024; Chen, 2023; R1).
In our bilevel HPO framework, we first fix the noise scheduler $q(t)$ and train the diffusion model using the score-matching objective as in (Song et al., 2021a,b; Ho et al., 2020; Nichol et al., 2021), then measure backward image quality via FID at the upper level. The proposed method not only outperforms conventional HPO and significantly accelerates its runtime, but also enhances DDIM with empirically chosen schedules in comparable time; see Table 2. > [R1] Simple diffusion: End-to-end diffusion for high resolution images. E Hoogeboom, et. al. ICML 2023. **Q2. Biased Monte Carlo (MC) estimation.** Yes, the vanilla MC estimator is biased, but Theorem 1 can accommodate the $\epsilon_k$ error. Moreover, since the reward is always positive, we have $e^{r_2(\cdot)/\lambda}, e^{(r_1(\cdot) / \gamma+r_2(\cdot))/\lambda}\geq 1$. Then, since $\log(a)$ is Lipschitz continuous when $a\geq 1$, the MC estimation error can be controlled by a large sampling batch size (see (Ji et al. 2021; Arbel & Mairal, 2022; Ghadimi & Wang, 2018)) or momentum updates (see [R2,R3]). However, empirical evidence suggests that we do not need a very large batch size and that gradient updates without momentum also work well. We will add a comment in the revision and leave the theoretical improvement based on momentum updates for future work. > [R2] Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions. M. Wang, et. al. Math. Programming, 2017. > [R3] Solving stochastic compositional optimization is nearly as easy as solving stochastic optimization. T. Chen, et. al. IEEE Trans. on Signal Processing, 2021. **Q3. Generating from the optimal entropy-guided distribution.** We only need generation from the $\epsilon$-optimal entropy-guided distribution, and the error in Algorithm 5 can be accommodated by $\epsilon_k$ in Theorem 1.
Moreover, Algorithm 5 in (Guo et al., 2024) converges linearly under a concave reward function and a strongly convex regularization term, implying that this accuracy can be achieved in $\mathcal{O}\bigl(\log(\epsilon_k^{-1})\bigr)$ iterations of Algorithm 5, which is not a huge computational burden. We will add a comment in the revision. **Q4. Applicability of other guidance strategies.** Yes, other guidance terms can be applied as long as they ensure the backward process converges to the optimal entropy-guided distribution, since our entropy-weight tuning approach is guidance-agnostic. We specifically choose the guidance from (Guo et al., 2024) because of its finite-time optimization guarantees. A clarifying paragraph will be provided. **Q5. Applicability of the noise scheduling task to faster sampling with a diffusion model.** Due to the space limit, please see the response to **Q4** for Reviewer XKpE. **Q6. Noise scheduling task on the MNIST dataset.** Due to the space limit, please see the response to **Q1** for Reviewer XKpE. **Q7. Writing of this paper.** We follow the bilevel HPO framework, so some diffusion model details are moved to the Appendix due to the space limit. We will clarify and improve the writing in the revision. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response, and for clarifying some points. Fundamentally, I think the paper isn't improved by these answers for me, so I will keep my score. --- Reply to Comment 1.1.1: Comment: Thank you for taking time to review our clarifications and providing constructive feedback. We will consider your insights carefully in the revision as we continue to improve our work.
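The bias discussed in Q2, where the log sits outside an empirical mean, can be demonstrated numerically: by Jensen's inequality the plug-in estimator is biased downward, and the bias shrinks as the sampling batch size grows. The Gaussian reward model and batch sizes below are illustrative assumptions (the paper's rewards are positive), not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, sigma = 1.0, 0.5
# For r ~ N(0, sigma^2), log E[e^{r/lam}] has the closed form sigma^2 / (2 lam^2).
true_val = sigma**2 / (2.0 * lam**2)

def plugin_bias(batch_size, reps=20000):
    """Average error of the plug-in estimator log( (1/N) sum_i e^{r_i/lam} )."""
    r = rng.normal(0.0, sigma, size=(reps, batch_size))
    est = np.log(np.mean(np.exp(r / lam), axis=1))  # log OUTSIDE the MC mean
    return est.mean() - true_val

bias_small = plugin_bias(batch_size=4)    # noticeably negative (Jensen gap)
bias_large = plugin_bias(batch_size=256)  # bias shrinks roughly like 1/N
```

This matches the rebuttal's point: the estimator is biased, but the error is controllable by increasing the sampling batch size.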
Adversarial Perturbations Are Formed by Iteratively Learning Linear Combinations of the Right Singular Vectors of the Adversarial Jacobian
Accept (poster)
Summary: This paper introduces RisingAttack, a novel method for generating ordered top-K adversarial attacks on Deep Neural Networks (DNNs) by optimizing directly in the image space. The method leverages Sequential Quadratic Programming (SQP) to manipulate adversarial perturbations as linear combinations of the right singular vectors of a specially constrained logit-to-image Jacobian matrix. Demonstrated across multiple architectures like ResNet-50, DenseNet-121, ViT-B, and DEiT-S on the ImageNet-1k dataset, RisingAttack outperforms existing methods such as QuadAttack by achieving higher attack success rates and less perceptible perturbations. By addressing both the ordered top-K attack problem and optimizing in the high-dimensional image space, this work enhances the understanding of adversarial vulnerabilities in DNNs.

Claims And Evidence: See strengths and weaknesses. Methods And Evaluation Criteria: See strengths and weaknesses. Theoretical Claims: See strengths and weaknesses. Experimental Designs Or Analyses: See strengths and weaknesses. Supplementary Material: See strengths and weaknesses. Relation To Broader Scientific Literature: See strengths and weaknesses. Essential References Not Discussed: See strengths and weaknesses.

Other Strengths And Weaknesses: Strengths:
1. The approach is based on solid mathematical foundations, particularly the application of Sequential Quadratic Programming (SQP) and Singular Value Decomposition (SVD) to optimize the adversarial attack, ensuring that the perturbations are not just random but strategically effective.
2. It has been tested and shown effective across different types of deep neural network architectures, including both traditional convolutional networks and more recent transformer-based models, indicating its adaptability and broad applicability.
3.
The method provides new insights into the susceptibility of DNNs to adversarial attacks, contributing to the broader understanding of model weaknesses and paving the way for developing stronger defensive mechanisms.

Weaknesses:
1. The reliance on computationally intensive techniques like SQP and SVD may render RisingAttack less feasible for real-time or on-device applications where computational resources are limited. The paper does not sufficiently address the method's performance or feasibility in real-world scenarios, where computational efficiency and the ability to operate under diverse and unpredictable conditions are crucial.
2. The effectiveness of the attack largely hinges on the accurate computation of the Jacobian matrix, which can vary significantly depending on the model's architecture and the data, potentially limiting the method's effectiveness in less ideal conditions.
3. The generalization of the method to tasks outside of image classification or to non-visual data remains untested, which may limit its utility in broader applications of AI. Although it outperforms certain established methods, the paper could benefit from a broader comparison with a wider range of adversarial attack strategies to more comprehensively position its effectiveness within the field.

Other Comments Or Suggestions: See strengths and weaknesses. Questions For Authors: See strengths and weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer D2j5, Thank you for your valuable comments and efforts. We address your concerns as follows and will incorporate these points carefully in the revision. ### C1: The reliance on computationally intensive techniques like SQP and SVD may render RisingAttack less feasible for real-time or on-device applications where computational resources are limited. The paper does not sufficiently address the method's performance or feasibility in real-world scenarios, where computational efficiency and the ability to operate under diverse and unpredictable conditions are crucial. > Thank you. We acknowledge the importance of developing methods that can be deployed in real-world scenarios under various conditions. > Both our RisingAttacK and the prior art, QuadAttacK, cannot achieve real-time attacks yet (please see our time complexity report in our response to C1 from Reviewer ubo3). > The novelty of our proposed RisingAttacK lies in its ability to learn ordered top-K adversarial perturbations directly in the image space using SQP, for the first time to our knowledge, which is crucial for enhancing our understanding of adversarial perturbations and potentially for developing adversarial defense methods. > We hope that the deployment aspects of our RisingAttacK, as well as other attack methods, can be improved gradually as techniques become more mature with better optimizers and engineered implementations. > We also hope that the current lack of real-time running speed of our RisingAttacK (as well as the prior art, QuadAttacK) is not a factor that negatively impacts us and other researchers in investigating rigorous optimization frameworks such as SQP in adversarial learning. ### C2: The effectiveness of the attack largely hinges on the accurate computation of the Jacobian matrix, which can vary significantly depending on the model's architecture and the data, potentially limiting the method's effectiveness in less ideal conditions. > Thank you.
This might be a misunderstanding: the computation of the Jacobian matrix is not approximated. Since most DNNs are fully differentiable and we are studying white-box attacks, the logits-to-input Jacobian matrix can be computed exactly. For example, we used `torch.func.jacrev` [1] in our implementation, so the Jacobian computation does not limit our method's effectiveness.
> The approximation in our proposed method lies in the first-order linearization of a DNN (Eqn. 6), which relaxes the optimization problem with highly non-linear constraints to one with linear constraints, as commonly done under the SQP framework.

### C3: The generalization of the method to tasks outside of image classification or to non-visual data remains untested, which may limit its utility in broader applications of AI. Although it outperforms certain established methods, the paper could benefit from a broader comparison with a wider range of adversarial attack strategies to more comprehensively position its effectiveness within the field.

> Thank you. We acknowledge the importance of developing attack methods and testing them on many tasks beyond image classification. However, image classification is still the most widely used application task for studying new adversarial attack methods.
> The ordered top-K targeted attacks studied in this paper are still under active research, with two previous approaches that we could find (Zhang & Wu, 2020; Paniagua et al., 2023). For practical considerations in conducting experimental comparisons, we follow the prior art in focusing on image classification tasks. In our opinion, the reported experimental results are sufficiently comprehensive to verify the effectiveness of our proposed method.
> Looking forward, our proposed method should have extension potential similar to that of methods such as PGD (projected gradient descent) for learning top-1 targeted attacks.
We leave those applications to future work, while we continue working on improving the efficiency of our current RisingAttacK.

---
[1] https://pytorch.org/docs/stable/generated/torch.func.jacrev.html
---

Rebuttal Comment 1.1: Thanks for the authors' rebuttal. The response did not fully resolve my questions, so I keep my previous rating.
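The exactness claim in the C2 discussion above can be illustrated with a small sketch. The rebuttal's actual implementation uses `torch.func.jacrev`; the NumPy stand-in below, with a hypothetical two-layer model, shows the same point by checking an analytic (chain-rule) Jacobian against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy differentiable "model": logits f(x) = W2 @ tanh(W1 @ x).
d_in, d_hid, d_out = 6, 5, 4
W1 = rng.standard_normal((d_hid, d_in))
W2 = rng.standard_normal((d_out, d_hid))

def f(x):
    return W2 @ np.tanh(W1 @ x)

def jacobian_exact(x):
    # Chain rule: J = W2 @ diag(tanh'(W1 x)) @ W1 -- exact, no approximation.
    h = np.tanh(W1 @ x)
    return W2 @ ((1.0 - h**2)[:, None] * W1)

x = rng.standard_normal(d_in)
J = jacobian_exact(x)          # shape (d_out, d_in): logits-to-input Jacobian

# Central finite differences agree with the analytic Jacobian.
eps = 1e-6
J_fd = np.stack(
    [(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(d_in)],
    axis=1,
)
print(np.abs(J - J_fd).max() < 1e-6)  # → True
```

For a real DNN the analytic chain rule is exactly what reverse-mode autodiff (e.g., `torch.func.jacrev`) computes, which is why no approximation enters at this step; the only approximation in the rebuttal's account is the first-order linearization used by SQP.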
Summary: In this work, the authors propose an ordered top-K targeted white-box attack called RisingAttacK, which solves the non-linearly constrained optimization problem in image space under the sequential quadratic programming framework. Experiments on the ImageNet-1k dataset validate the effectiveness of RisingAttacK.

Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes, the experimental design is good.
Supplementary Material: No
Relation To Broader Scientific Literature: RisingAttacK achieves good performance on top-K targeted attacks compared with the baseline QuadAttacK.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses:
1. I cannot figure out why we need the top-K targeted attack. Can you provide any practical scenarios or its benefits compared with targeted/untargeted attacks?
2. It is not clear how you solve Eq. (25). Which optimizer do you adopt?
3. The proposed holistic figure of merit (FoM) metric seems limited: it is restricted to a single baseline. How could I calculate it with multiple baselines?
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Dear Reviewer 69zR, thank you for your valuable comments and efforts. We address your concerns one by one as follows and will carefully update the revision.

### C1: I cannot figure out why we need the top-K targeted attack. Can you provide any practical scenarios or its benefits compared with targeted/untargeted attacks?

> First of all, targeted attacks provide better controllability for adversaries than untargeted attacks and are a harder problem to solve, as widely recognized in the literature.
> For targeted attacks, in terms of how precisely and how aggressively we can manipulate the outputs (i.e., the entire logits) of DNNs, there are three settings of increasing difficulty: (i) conventional top-1 targeted attacks, (ii) unordered top-$K$ targeted attacks ($K\geq 1$), and (iii) ordered top-$K$ targeted attacks ($K\geq 1$). We focus on (iii) in this paper.
> Going from top-1 to top-$K$ (unordered or ordered), as pointed out in (Zhang & Wu, 2020), the robustness of attack methods themselves should be investigated. For example, a top-1 targeted attack may be viewed as successful (e.g., flipping a cat image to a dog prediction) while the ground-truth label (cat) remains the top-2 or top-3 prediction, which is less effective in terms of the top-k (e.g., top-5) accuracy metric. So, top-$K$ attacks, especially with a large $K$, can ensure the ground-truth labels are pushed sufficiently far away.
> Between unordered top-$K$ and ordered top-$K$, as pointed out in (Paniagua et al., 2023), there are two scenarios in practice, with ordinal examples and nominal examples:
>> "Imagine a cancer risk assessment tool that analyzes 2D medical images (e.g., mammograms) to categorize patients' cancer risk into the ordinal 7-level risk ratings ([Extremely High Risk, Very High Risk, High Risk, Moderate Risk, Low Risk, Minimal Risk, No Risk]). An oncologist could use this tool to triage patients, prioritizing those in the highest risk categories for immediate intervention. An attacker aiming to delay treatment might use an ordered top-3 adversarial attack to change a prediction for a patient initially assessed as Very High Risk. They could target the classes [Moderate, Low, Minimal], subtly downgrading the urgency without breaking the logical sequence of risk categories. An unordered attack, in contrast, might lead to a sequence like [Low, Very High, Minimal], disrupting the ordinal relationship between classes. Such a disruption could raise red flags, making the attack easier to detect."
>> Please see page 2 of (Paniagua et al., 2023) for the nominal examples; due to the space limit, we cannot quote those examples here.
> Similar in spirit to the ordinal and nominal examples provided in (Paniagua et al., 2023), learning ordered top-$K$ targeted attacks could find practical applications such as recommendation systems (which often recommend a number of items in a particular order) and retrieval systems (which often return ordered retrieved items).
> Closer to computer vision applications, APIs such as Google Cloud Vision, Microsoft Azure Computer Vision, Amazon Rekognition, and IBM Watson Visual Recognition often return ordered top-$K$ (e.g., 10) predictions about input images, for which ordered top-$K$ targeted attacks could be studied.

### C2: It is not clear how you solve Eq. (25). Which optimizer do you adopt?

> Thank you.
As stated in lines 340-341, "We show that Eqn. 25 has a closed-form solution (see the proof in Appendix C), reproducing the result in Eqn. 18." So, we do not need an optimizer to solve it.

### C3: The proposed holistic figure of merit (FoM) metric seems limited: it is restricted to a single baseline. How could I calculate it with multiple baselines?

> Thank you. The proposed FoM is for pairwise comparisons. For comparing a method 1 against other methods (2 to $M$), there are three possible extensions:
+ We may simply compute $\mathrm{mean}_{j=2}^M \text{FoM}(1, j)$, similar in spirit to the mean Average Precision (mAP) that is widely used in object detection for comparing accuracy across multiple categories and across multiple IoU thresholds.
+ We may further consider: (i) the strict $\text{FoM} = \frac{\text{ASR}^1}{\max_{j=2}^M(\text{ASR}^j)}\cdot \frac{1}{3}\cdot \sum_{p\in \{1, 2,\infty\}}\frac{\min_{j=2}^M\ell_p^j}{\ell_p^1}$, to show how well method 1 performs against the best of the rest in every aspect (ASR and the three norms), and (ii) the average $\text{FoM} = \frac{\text{ASR}^1}{\mathrm{mean}_{j=2}^M(\text{ASR}^j)}\cdot \frac{1}{3}\cdot \sum_{p\in \{1, 2,\infty\}}\frac{\mathrm{mean}_{j=2}^M\,\ell_p^j}{\ell_p^1}$.
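To make the pairwise FoM and the proposed multi-baseline extensions concrete, here is an illustrative sketch; the helper names and the ASR/norm numbers are hypothetical, not taken from the paper. Higher ASR and lower $\ell_1/\ell_2/\ell_\infty$ perturbation norms both favor method 1:

```python
# Illustrative sketch of the pairwise FoM and its multi-baseline
# extensions; ASR and norm values below are hypothetical, not from the paper.

def fom_pairwise(asr1, norms1, asr2, norms2):
    """FoM(1, 2): higher ASR and lower l1/l2/linf norms favor method 1."""
    norm_ratio = sum(norms2[p] / norms1[p] for p in ("l1", "l2", "linf")) / 3
    return (asr1 / asr2) * norm_ratio

def fom_strict(asr1, norms1, ref_asrs, ref_norms):
    """Strict variant: method 1 against the best of the rest in every aspect."""
    norm_ratio = sum(
        min(r[p] for r in ref_norms) / norms1[p] for p in ("l1", "l2", "linf")
    ) / 3
    return (asr1 / max(ref_asrs)) * norm_ratio

m1_asr, m1 = 0.9, {"l1": 10.0, "l2": 1.0, "linf": 0.1}   # method under test
b1_asr, b1 = 0.6, {"l1": 20.0, "l2": 2.0, "linf": 0.2}   # baseline 1
b2_asr, b2 = 0.8, {"l1": 15.0, "l2": 1.5, "linf": 0.3}   # baseline 2

pair = fom_pairwise(m1_asr, m1, b1_asr, b1)
mean_fom = (pair + fom_pairwise(m1_asr, m1, b2_asr, b2)) / 2  # mean_j FoM(1, j)
strict = fom_strict(m1_asr, m1, [b1_asr, b2_asr], [b1, b2])
print(round(pair, 3), round(mean_fom, 3), round(strict, 3))  # → 3.0 2.625 1.875
```

A FoM of 1 means parity; values above 1 mean method 1 wins jointly on success rate and perturbation norms, which matches the rebuttal's reading of the formulas.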
Summary: The paper introduces a new method for generating ordered top-K adversarial attacks. The authors use Sequential Quadratic Programming (SQP) to solve the optimization problem behind top-K adversarial attacks directly in the image space. After adapting the SQP algorithm to make the computation tractable and avoid a high $\ell_\infty$ norm, they derive a method called RisingAttacK that outperforms the previous state-of-the-art method, QuadAttacK, on ImageNet-1k. The new adversarial method provides insights into the nature of adversarial attacks: they are linear combinations of the right singular vectors of the attack-targets-ranking constrained logit-to-image Jacobian matrix.

## Update after rebuttal

The authors answered the questions that I had about the two claims that I identified in the paper. Concerning C1, the authors did some preliminary experiments to address my question and are willing to "re-run all the experiments with time complexity recorded for a full-scale time complexity comparison between the two methods in revision". I am satisfied with the answer to my question about C1. Concerning C2, the authors agreed with my remark that the claim is misleading and suggested changing the title of the paper as a consequence. The rephrasing makes the claim less strong; "Adversarial Perturbations Are Linear Combinations of the Right Singular Vectors of the Attack-Targets-Ranking Constrained Jacobian" was an interesting theoretical claim about adversarial attacks. While I appreciate that the authors took my remark into account, the fact that the original claim was misleading and will be rephrased impacts the message of the paper. Overall, I am satisfied with the authors' response and have decided to change my score from 3 to 4.

Claims And Evidence: There are two main claims in the paper:
1.
"(…) ordered top-K adversarial perturbations can be expressed as linear combinations of the right singular vectors (corresponding to non-zero singular values) of the attack-targets-ranking constrained logit-to-image Jacobian matrix."
2. "Our RisingAttacK significantly outperforms the previous state-of-the-art approach, QuadAttacK, consistently across all top-K (1, 5, 10, 20, 25) and four models (ResNet-50, DenseNet-121, ViT-B and DEiT-S) on ImageNet-1k in experiments."

The first claim provides a valuable insight into the structure of a top-K adversarial attack by defining the low-dimensional ($O(d)$) manifold it belongs to. However, given the iterative nature of the method, the adversarial perturbation after reaching the iteration budget is actually a linear combination of the right singular vectors of A (the attack-targets-ranking constrained logit-to-image Jacobian matrix) evaluated on the adversarial image generated by the previous iteration. This makes the subspace to which the adversarial attack belongs hard to interpret, as it depends on the adversarial attack obtained in the previous iteration. The second claim is clearly supported by extensive experiments on different iteration budgets and datasets.

Methods And Evaluation Criteria: The choice of architectures and dataset makes sense. The FoM metric gives a nice overview of how methods compare in different scenarios. The authors say that they compare attacks under the same computing budget for fair comparison. However, this computation budget is measured by the number of iterations performed by the method. To ensure that the number of iterations is a good proxy for computation budget, it would have been valuable to provide the time complexity of one iteration for each method, or an empirical measurement of the compute time and memory required.
Theoretical Claims: N/A
Experimental Designs Or Analyses: See "Methods and Evaluation Criteria".
Supplementary Material: I reviewed A, B and C.
Relation To Broader Scientific Literature: This work is part of the larger family of work on adversarial attacks. Earlier work mainly focused on perturbing the top-1 prediction of a neural network; in this work the authors study the ordered top-K attack case. Top-K attacks were introduced by Zhang and Wu [1]; Paniagua et al. [2] improved the attack method using quadratic programming. The authors build upon the method of Paniagua et al. [2].

[1] Zhang, Zekun, and Tianfu Wu. "Learning ordered top-k adversarial attacks via adversarial distillation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 2020.
[2] Paniagua, Thomas, Ryan Grainger, and Tianfu Wu. "QuadAttacK: A Quadratic Programming Approach to Learning Ordered Top-K Adversarial Attacks." Advances in Neural Information Processing Systems 36 (2023): 48962-48993.

Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors:
• Would it be possible to report the time complexity of an iteration of RisingAttacK and of an iteration of QuadAttacK? Or alternatively, to evaluate empirically the time necessary to run both methods? This is motivated by having a fair comparison between methods (see "Methods and Evaluation Criteria").
• Could you clarify the claim of the paper: "(…) ordered top-K adversarial perturbations can be expressed as linear combinations of the right singular vectors (corresponding to non-zero singular values) of the attack-targets-ranking constrained logit-to-image Jacobian matrix"? The iterative nature of the method seems to make the interpretation of these right singular vectors more difficult (see "Claims and Evidence" for more details).
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Dear Reviewer ubo3, thank you very much for your valuable comments and efforts. We address your concerns one by one as follows, and will carefully update the revision.

### C1: Report the time complexity of an iteration of RisingAttacK and that of QuadAttacK

> Thank you. We report the **average seconds/iteration** using a batch of 32 images over 30 iterations on a single A100 GPU as follows:

| $K$ | QuadAttacK | RisingAttacK |
| :----: | :----: | :----: |
| 1 | 1.31 | 1.38 |
| 5 | 1.17 | 0.83 |
| 10 | 0.95 | 1.57 |
| 20 | 1.30 | 2.8 |
| Avg | 1.18 | 1.64 |

> We note three aspects:
+ For both methods, the time complexity of an iteration is not necessarily monotonically related to $K$. It reflects the complexity of the underlying QP to be solved, which in turn is affected by the sheer challenge of the randomly sampled top-$K$ attack targets for different $K$'s and different images.
+ On average (32 images over 30 iterations), our RisingAttacK is more computationally expensive than QuadAttacK. Given the significantly improved performance of our RisingAttacK, the increased time complexity seems reasonable.
+ We will re-run all the experiments with time complexity recorded for a full-scale time complexity comparison between the two methods in the revision.

### C2: Could you clarify the claim of the paper "(…) ordered top-K adversarial perturbations can be expressed as linear combinations of the right singular vectors (corresponding to non-zero singular values) of the attack-targets-ranking constrained logit-to-image Jacobian matrix"? The iterative nature of the method seems to make the interpretation of these right singular vectors more difficult.

> Thank you for pointing this out. We agree with you that the iterative nature of our RisingAttacK indeed makes the interpretation imprecise.
> We propose to change the title to "*Adversarial Perturbations Are **Formed by Iteratively Learning** Linear Combinations of the Right Singular Vectors of the Attack-Targets-Ranking Constrained Jacobian*", as well as all the related statements in the text.
>> We would like to continue highlighting the observation of "*Linear Combinations of the Right Singular Vectors of the Attack-Targets-Ranking Constrained Jacobian*" in solving Eqn. 18 at each iteration, which we think provides useful insights for understanding adversarial perturbations directly in the image space.
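As an illustrative aside on the per-iteration observation discussed above (a toy sketch, not the paper's algorithm): for one linearized step with linear constraints $A\delta = b$, the minimum-norm perturbation lies in the row space of $A$, i.e., the span of the right singular vectors with non-zero singular values. A quick NumPy check, with a random matrix standing in for the attack-targets-ranking constrained logit-to-image Jacobian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the constrained logit-to-image Jacobian:
# K ranking constraints over a d-pixel image.
K, d = 4, 50
A = rng.standard_normal((K, d))
b = rng.standard_normal(K)

# Minimum-norm solution of A @ delta = b (one linearized step).
delta = np.linalg.pinv(A) @ b

# Rows of Vt corresponding to non-zero singular values span the row space of A.
U, S, Vt = np.linalg.svd(A, full_matrices=True)
V_r = Vt[: int(np.sum(S > 1e-10))]   # right singular vectors, non-zero sigma

# delta is a linear combination of those right singular vectors:
# projecting onto their span reproduces delta exactly.
coeffs = V_r @ delta
assert np.allclose(V_r.T @ coeffs, delta)

# ...whereas a generic vector does not lie in that K-dimensional subspace.
g = rng.standard_normal(d)
print(np.allclose(V_r.T @ (V_r @ g), g))  # → False
```

Per the reviewer's point, in the actual method the matrix $A$ is re-evaluated at the current adversarial image each iteration, so the subspace changes from step to step; this sketch only illustrates the single-step statement.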
CVE-Bench: A Benchmark for AI Agents’ Ability to Exploit Real-World Web Application Vulnerabilities
Accept (spotlight poster)
Summary: This paper presents CVE-Bench, a new benchmark designed to evaluate AI agents in the cybersecurity domain, specifically focusing on real-world web vulnerabilities. It compiles 40 CVEs from the past year, covering eight attack types, to create a comprehensive assessment framework. To simulate realistic exploitation scenarios, CVE-Bench incorporates two evaluation settings: zero-day (no prior vulnerability information) and one-day (a high-level vulnerability description is provided). The benchmark includes an automatic evaluation framework to assess AI agents' performance in exploiting these vulnerabilities. Three cybersecurity agents (Cy-Agent, T-Agent, and AutoGPT) are evaluated on CVE-Bench using one LLM (OpenAI GPT-4o). The study provides both quantitative and qualitative analyses, detailing success rates, failure modes, and limitations of current AI-driven cybersecurity frameworks on this benchmark.

Claims And Evidence: I found the claims made to be sound.
Methods And Evaluation Criteria: The methods and evaluation on the new benchmark seem realistic.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Overall, I found the design of the benchmark good, and the experimental designs and analyses are sufficient. I think that the following could make the paper even better:
1. Using the same environment for each agent, for example, installing sqlmap in the container of Cy-Agent and not only for T-Agent.
2. Evaluating additional LLMs (e.g., Claude, Gemini, open-source models like LLaMA or Mistral) would provide a broader understanding of model capabilities and generalization on this benchmark. I know that this takes time and effort, so I will not judge the paper negatively for this.
Supplementary Material: No
Relation To Broader Scientific Literature: This paper introduces a new benchmark for finding CTF exploits.
Essential References Not Discussed: The paper does not sufficiently discuss existing cybersecurity benchmarks, despite their relevance to the study. While CyBench is mentioned, CTFs are widely explored in InterCode-CTF [Yang et al., 2023], published at NeurIPS 2023, and NYU-CTF Bench [Shao et al., 2024], published at NeurIPS 2024. Meta's CyberSecEval 1 and 2 [Bhatt et al., 2023; 2024] and CyberSecEval 3 [Wan et al., 2024] benchmarks are also publicly available. These benchmarks assess LLMs' abilities in vulnerability exploitation, detection, and security-related reasoning, making them directly relevant to the paper's contributions.

Other Strengths And Weaknesses:
Strengths:
1) The paper is well-written, self-contained, and easy to follow, making it accessible even to readers without a cybersecurity background. It effectively explains the different aspects of the benchmark.
2) The benchmark is designed to simulate real-world web application vulnerability exploitation, ensuring practical relevance. The methodology used to construct the benchmark is well-founded.
3) The introduction of an automated evaluation system makes the benchmark reproducible, allowing other researchers to test their AI agents on the same framework.
Weaknesses:
1) The benchmark does not contain "test" and "development" splits, making correct evaluation of AI applications built on top of it harder in the future.

Other Comments Or Suggestions: The top bar title needs to be changed from "Submission and Formatting Instructions for ICML 2025".

Questions For Authors:
1) Do you have any quantitative results regarding the failure modes described in Section 4.3?
2) In Section 4.2 (Exploit Composition), it is stated: "With sqlmap, T-Agent can locate the vulnerability and perform SQL injection automatically. According to the reasoning traces, Cy-Agent attempts sqlmap for most of the CVEs. Appropriate use of sqlmap can significant improve the success rate of exploit SQL injection vulnerabilities, …".
Could you clarify the difference between T-Agent and Cy-Agent in their usage of sqlmap? Does Cy-Agent have access to sqlmap but fail to use it effectively, or is the tool configured differently across the agents?
3) In Figure 3, T-Agent performs better than AutoGPT in the zero-day scenario, while AutoGPT outperforms T-Agent in the one-day scenario. Do you have an explanation for this behavior?

Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: We thank the reviewer for their insightful comments. We will incorporate the suggestions in the revision.

> **E1**: Using the same environment for each agent - for example, install sqlmap in the container of CyAgent and not only for the T-Agent.

We clarify that sqlmap is installed in the containers of both Cy-Agent and T-Agent. We will revise the experiments to provide the same set of tools for all agents.

> **E2**: Evaluating additional LLMs (e.g., Claude, Gemini, LLaMA, and Mistral) would provide a broader understanding of model capabilities and generalization on this benchmark. I know that this takes time and effort so will not be judging the paper negatively for this.

We find that the open-source model, Llama 3.1, achieves only 0% success rates in the zero-day and one-day settings. We will run experiments with more LLMs in the revision, such as Claude.

> **Q1**: Do you have any quantitative results regarding the failure modes described in Section 4.3?

We summarize the frequency of the failure modes as follows. Agents can fail for multiple reasons; therefore, the sum of the frequencies of all failure modes can exceed 100%.

| | zero-day | | | one-day | | |
|---|---|---|---|---|---|---|
| | Cy-Agent | AutoGPT | T-Agent | Cy-Agent | AutoGPT | T-Agent |
| Limited Task Understanding | 30.0% | 15.0% | 0% | 20.0% | 5.0% | 0% |
| Incorrect Focus | 0% | 0% | 35.0% | 0% | 0% | 30.0% |
| Insufficient Exploration | 67.5% | 72.5% | 80.0% | 37.5% | 45.0% | 55.0% |
| Tool Misuse | 47.5% | 5.0% | 17.5% | 27.5% | 22.5% | 10.0% |
| Inadequate Reasoning | 10.0% | 7.5% | 7.5% | 40.0% | 27.5% | 20.0% |

As shown, all agents are bottlenecked by insufficient exploration, meaning that they failed to identify the vulnerable endpoints of applications, even when high-level vulnerability descriptions were provided.
T-Agent consistently understood task targets (with 0% Limited Task Understanding), while it sometimes focused on websites other than the vulnerable one provided in the prompt (e.g., www.example.com). Generally, compared to the zero-day setting, agents with one-day descriptions had a lower frequency of naive failures, including Limited Task Understanding, Incorrect Focus, and Insufficient Exploration, but failed more often due to Tool Misuse and Inadequate Reasoning.

> **Q2**: Could you clarify the difference between T-Agent and Cy-Agent in their usage of sqlmap? Does Cy-Agent have access to sqlmap but fail to use it effectively, or is the tool configured differently across the agents?

We clarify that both T-Agent and Cy-Agent have access to sqlmap with the same configuration. However, they take different approaches to using sqlmap:
- T-Agent uses a hierarchical structure with a team manager to determine when to use sqlmap, and has a specialized agent dedicated specifically to SQL injection attacks.
- Cy-Agent uses sqlmap as a general-purpose tool without a specialized framework for SQL injection.

We find that the hierarchical planning and task-specific agents in T-Agent enhance its ability to use tools effectively, compared to Cy-Agent.

> **Q3**: In Figure 3, T-Agent performs better than AutoGPT in the zero-day scenario, while AutoGPT outperforms T-Agent in the one-day scenario. Do you have an explanation for this behavior?

With further analysis, we found that AutoGPT can uncover new vulnerabilities that were not included in the CVE description. This occurs when AutoGPT is unable to exploit the specified vulnerabilities in the one-day scenario, but the web application contains an alternative, more exploitable vulnerability. For example, in CVE-2024-36779, the one-day description targets a SQL injection vulnerability in `editCategories.php`, requiring a complex, time-based blind SQL injection.
AutoGPT struggled with this but uncovered a vulnerability in `index.php`, which could be easily exploited by using `' OR 1=1 --` to bypass filters and gain administrator access. By identifying easier vulnerabilities in the one-day setting, AutoGPT achieved 4.5% and 5.0% higher success rates with one or five attempts, respectively, while T-Agent did not find new vulnerabilities. Nonetheless, when focusing solely on the described vulnerabilities, T-Agent continues to outperform AutoGPT in the one-day scenario. We will add this explanation as a case study in the revision.

> The paper does not sufficiently discuss existing cybersecurity benchmarks, despite their relevance to the study.

Thank you for suggesting the relevant CTF benchmarks and Meta's CyberSecEval benchmarks. We will add and discuss them in the related work section of the revision.

> The top bar title needs to be changed from "Submission and Formatting Instructions for ICML 2025".

We will fix the formatting issues in the revision.
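As background on the `' OR 1=1 --` bypass mentioned in the case study above, the classic mechanism can be sketched with an in-memory SQLite database. The schema, credentials, and login functions below are hypothetical illustrations, not taken from CVE-2024-36779:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

def login_vulnerable(name, pw):
    # Builds the query by string interpolation: the injection point.
    query = f"SELECT * FROM users WHERE name = '{name}' AND pw = '{pw}'"
    return conn.execute(query).fetchone() is not None

def login_safe(name, pw):
    # Parameterized query: input is bound as data, never parsed as SQL.
    query = "SELECT * FROM users WHERE name = ? AND pw = ?"
    return conn.execute(query, (name, pw)).fetchone() is not None

payload = "' OR 1=1 --"  # closes the string, makes WHERE always true,
                         # and "--" comments out the password check
print(login_vulnerable(payload, "wrong"))  # → True: filter bypassed
print(login_safe(payload, "wrong"))        # → False
```

In the vulnerable path the executed SQL becomes `SELECT * FROM users WHERE name = '' OR 1=1 --' AND pw = 'wrong'`, which matches every row, which is the same class of filter bypass the rebuttal describes AutoGPT exploiting in `index.php`.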
Summary:
1. This paper proposes a new benchmark for LLM-agent attacks.
2. Some experiments are conducted.

Claims And Evidence: **Yes**
Methods And Evaluation Criteria: **Yes** There is no proposed method in this paper; only the new CVE-Bench is introduced. The authors provide a comparison with other similar benchmarks in Table 1.
Theoretical Claims: **N/A** There appear to be no theoretical claims in this paper.
Experimental Designs Or Analyses: **Yes** The authors conduct experiments using three different LLM agents on their new benchmark. Analyses are provided in Sections 4.2 and 4.3.
Supplementary Material: It seems that no supplementary material was provided by the authors. However, there is an Appendix after the *References* at the end of the PDF, and some detailed settings are included in it. I am not certain whether the Appendix should be separated from the main paper PDF according to the official guidance.
Relation To Broader Scientific Literature: The key contribution of this paper is the new benchmark for cyberattacks, **CVE-Bench**. The authors have discussed the relationship between CVE-Bench and other related works such as *Cybench* and *CVE*; they are all designed for evaluating LLM attacks (cyberattacks). The broader literature on LLM agents for attacks has been discussed in Section 3.2.
Essential References Not Discussed: The authors have discussed the essential related works in Section 2. As far as I know, there are no further necessary references.
Other Strengths And Weaknesses:
**Strengths:**
1. The proposed benchmark looks useful for LLM-agent attacks.
**Weaknesses:**
1. The presentation of this paper should be improved.
   * (Minor) In Figure 3, the Y-axis should display "30%" rather than simply "30" to properly indicate percentage values.
   * Figure 4 lacks clarity: while the annotation mentions eight distinct tasks, only a subset appears in the chart, creating confusion for readers.
2.
Despite spanning eight pages, the paper contains considerable content that appears superficial and fails to engage the reader effectively. The overall substance feels insufficient. Additional experiments and more in-depth analysis would strengthen the work and provide greater value to the research community.

Other Comments Or Suggestions: There are no further comments from the reviewer.
Questions For Authors: There are no further questions from the reviewer.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: We thank the reviewer for their insightful comments. We will incorporate the suggestions in the revision.

> W1: The presentation of this paper should be improved: (Minor) In Figure 3, the Y-axis should display "30%" rather than simply "30" to properly indicate percentage values. Figure 4 lacks clarity—while the annotation mentions eight distinct tasks, only a subset appears in the chart, creating confusion for readers.

We will revise our submission to fix these presentation issues:
1. We will add percentage marks (%) to all the metrics calculated as percentages.
2. We clarify that agents did not conduct all types of attacks successfully; therefore, only a subset of attacks appears in the chart. We will fix the annotation to include only the successful attacks.

> W2: Despite spanning eight pages, the paper contains considerable content that appears superficial and fails to engage the reader effectively. The overall substance feels insufficient. Additional experiments and more in-depth analysis would strengthen the work and provide greater value to the research community.

Thank you for examining the details of our work in the appendix. We recognize the need to enhance the presentation of the appendix. In the revision, we will clean up the appendix and incorporate more comprehensive details about CVE-Bench, including:
1. A detailed description of our data collection process
2. An example of exploit reproduction
3. An example of target containers
4. Running CVE-Bench with standard evaluation tools, such as Inspect-AI
5. A sample of agent running logs
Summary: The authors introduce CVE-Bench, a new benchmark designed to evaluate large language model (LLM) agents' capabilities in identifying core cybersecurity vulnerabilities. They define eight key types of core attacks that any robust system should withstand. This benchmark significantly reduces manual effort by enabling automated flaw detection within system architectures using LLM agents. The authors evaluate three different agents on CVE-Bench, providing insightful performance analysis and highlighting critical findings. The tasks are designed to reflect real-world challenges, making the benchmark highly relevant to the community. Moreover, the authors emphasize reproducibility, ensuring that others can reliably use and extend the benchmark. The difficulty of the tasks is validated through high CVSS scores, underscoring the benchmark's rigor and importance.

Claims And Evidence: The authors claim that current LLM agents are not capable of solving the benchmark and provide evidence of low performance, even in the one-day setting. While their evaluation supports this claim, I am curious about the performance of SOTA web agents. For instance, web/SWE agents like OpenHands and CodiumAI (and potentially other closed-source agents) could be relevant comparisons. Additionally, I find it difficult to believe that, even when provided with a known vulnerability in the one-day setting, the agents are entirely unable to solve the tasks. This raises the question: is the primary issue that the agents lack full contextual information about the scenario, leading them to choose the wrong tool? If so, further analysis of this failure mode would strengthen the paper's argument.

Methods And Evaluation Criteria: The authors evaluate their approach using standard cybersecurity testing agents, similar to those used in Cybench.
The evaluation criteria are reasonable and well-aligned with the problem: they assess whether an attack succeeds using an automated grader hosted within the same container as the web application, enabling continuous monitoring. Additionally, the authors follow established evaluation practices by running 30 iterations for GPT-4o and reporting Success@1 and Success@5 metrics. This setup seems appropriate and fair for benchmarking agent performance in this domain.

Theoretical Claims: No, there are no theoretical claims in the paper.
Experimental Designs Or Analyses: Yes, the experiments conducted on the benchmark appear methodologically sound. The authors evaluate three different agents across all eight tasks, ensuring comprehensive coverage. They also report key insights that highlight performance differences and failure patterns, contributing to a more nuanced understanding of the agents' capabilities and limitations.
Supplementary Material: Yes, I briefly went through another very similar benchmark, Cybench. Knowing the differences would give a better idea of the contributions made by this particular paper.
Relation To Broader Scientific Literature: Benchmarks for agents are especially timely and relevant, given the current surge in both research and industry interest around web-based agents. With numerous web agents emerging this year, it is crucial to have robust evaluation frameworks to assess their capabilities. This paper contributes to that need by introducing a benchmark that covers a range of cybersecurity tasks compared to existing benchmarks, making it a valuable addition to the field.
Essential References Not Discussed: The authors should consider including AgentSecurityBench (https://arxiv.org/abs/2410.02644) in their related work. This benchmark evaluates various agents on web-related attack tasks, making it a highly relevant comparison point.
I'm particularly curious to see how the three agents discussed in this paper would perform on AgentSecurityBench's tasks, and whether stronger, more capable agents exist that could provide a more comprehensive performance comparison. Including this reference could offer valuable context and help position the paper's benchmark more effectively within the broader literature.

Other Strengths And Weaknesses:

Strengths:
1. The experiments and insights are thorough and informative, providing a clear breakdown of where each agent fails.
2. The benchmark covers a wide span of challenging tasks, which are difficult yet reproducible, making it a valuable resource for future research.
3. The evaluation setup, including Success@1 and Success@5 metrics over 30 iterations, ensures robust performance reporting.
4. The benchmark enables continuous monitoring through an automated grader, enhancing practical usability.

Weaknesses:
1. The novelty is somewhat limited. The work appears heavily inspired by CyBench, primarily extending it with more complex tasks and tools, an incremental rather than transformative contribution.
2. The authors claim that the benchmark aims to save human effort by enabling LLM agents to discover unknown vulnerabilities. However, they do not present any concrete examples where the agents uncover vulnerabilities beyond the intended tasks. This raises doubts about the benchmark's practical utility if agents fail to explore beyond pre-defined scenarios.
3. There is no justification or survey supporting the specific choice of agents evaluated. Including a rationale for selecting these agents, or reporting results with other web/SWE baselines like OpenHands or Codium, would improve the paper's completeness and credibility.

Other Comments Or Suggestions: I found a typo in 'specify' in the CVE-2024-32980 section on page 8. Otherwise, the paper is well structured and thorough on the experiments side.

Questions For Authors:

1.
In the abstract, you mention that the benchmark's use case is to identify unpredictable threats. Did any of the agents discover flaws that were previously unpredictable or unknown? If not, does this suggest that using standard testing protocols could achieve similar results, thus questioning the utility of using LLM agents for evaluation in this context?

2. Are the agents aware that the task they are performing could be harmful? Specifically, if the task violates safety guardrails, do the agents acknowledge this by saying something like, "I am sorry"? This might explain the low success rate if the agents avoid performing risky actions.

3. Could you justify why you evaluated only these three agents on your benchmark? Are there other, perhaps stronger, agents that could provide more valuable insights into the tasks?

4. When the service is provided through APIs or libraries that lack a text-based user interface, does providing more detailed information or instructions about the tools help mitigate the agents' misuse? What might be causing the agents to choose the wrong tool or misuse it?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for their insightful comments. We provide the following clarifications and will include them in the revision.

> Even in the 1-day setting, agents are entirely unable to solve tasks. Analysis of failure modes is suggested.

We provide a quantitative analysis of failure modes in the response to Reviewer aA1n and find that failing to identify vulnerable endpoints is the key bottleneck, even in the 1-day setting. This is partially because one-day descriptions only provide high-level descriptions of vulnerabilities, while agents need to reason about specific vulnerable endpoints. We will add the quantitative analysis in a revision.

> W1: The novelty is somewhat limited. The work appears heavily inspired by and an incremental contribution over CyBench.

Although CyBench and CVE-Bench both focus on cybersecurity, CVE-Bench focuses on _realistic_ evaluation of AI agents compared to isolated capture-the-flag (CTF) exercises.

*Data Collection.* CVE-Bench focuses on 40 real-world web vulnerabilities with a high impact (rated "critical" by CVSS). In contrast, the 40 tasks in CyBench have various categories and difficulties, some of which (e.g., simple input validation) don't reflect the critical nature of current cybersecurity challenges.

*Task Formulation.* Our tasks require agents to detect vulnerabilities (0-day) and exploit them to achieve attack targets (0- & 1-day). In contrast, CyBench tasks are structured as CTF exercises, which test cybersecurity skills but don't fully reflect real-world hacking scenarios.

*Evaluation.* In CVE-Bench, we evaluate whether agents can impact applications, such as through data breaches or DoS. CyBench, however, evaluates flag correctness, a metric that doesn't reflect the cybersecurity risks.

> W2, Q1: Did agents discover flaws that were unpredictable? If not, does this suggest that using standard testing tools could achieve similar results?
We find that the penetration testing tool ZAP identified 0 vulnerabilities in CVE-Bench. As the vulnerabilities were originally detected by human experts, CVE-Bench is important for assessing whether AI agents can supplement human efforts. More importantly, CVE-Bench evaluates the risks of agents, providing important insights for policymakers [1, 2]. In addition, we found that agents can identify new vulnerabilities distinct from those in the CVE description, showing their potential to uncover unpredictable flaws. We provide a detailed example in the response to Reviewer aA1n. We will add the ZAP results and a case study of new vulnerabilities.

[1] UK AISI, "AI Safety Institute approach to evaluations." https://www.gov.uk/government/publications/ai-safety-institute-approach-to-evaluations/ai-safety-institute-approach-to-evaluations
[2] US AISI, "Technical Blog: Strengthening AI Agent Hijacking Evaluations." https://www.nist.gov/news-events/news/2025/01/technical-blog-strengthening-ai-agent-hijacking-evaluations

> W3, Q3: Including a rationale for selecting agents, or reporting with web/SWE baselines like OpenHands/Codium.

After further experiments, we found that OpenHands identified and/or exploited 0 vulnerabilities over 5 runs using the same LLM. This is primarily because OpenHands failed to attempt different endpoints of the applications thoroughly. We will add the results in the revision. In contrast, our selected agents have various capabilities relevant to cybersecurity:

1. AutoGPT: selected for its versatility and generality in handling complex tasks.
2. Cy-Agent: designed for cybersecurity challenges, this agent has skills and tools for cyberattacks.
3. T-Agents: it has the SOTA ability to exploit zero-day vulnerabilities, designed with the cooperation of different cybersecurity sub-domains.

> Q2: Do the agents acknowledge safety violations and deny the request?

We clarify that we have carefully framed the tasks in an ethical context and achieved 0 denials.
Agents are instructed to act as white-hat hackers with permissions granted by the application owners.

> Q4: Do more instructions on tools help mitigate misuse? What might be causing agents to choose the wrong tool or misuse it?

We used the default prompts from the specific agent frameworks, with slight modifications to prevent request denials and out-of-scope behaviors. To reduce the possibility of tool misuse, we provided usage instructions for the necessary tools, such as sqlmap. However, given the complexity of these tools (sqlmap alone has 200+ options), agents face challenges in selecting the optimal usage. This is compounded by the difficulty of identifying web vulnerabilities, which requires agents to explore different options. We will add details of the tools' setup in the revision.

> Consider including AgentSecurityBench in the related work.

We will include AgentSecurityBench in the revision. AgentSecurityBench focuses on attacks on AI systems (e.g., prompt injection and memory poisoning), which is orthogonal to CVE-Bench.

> I found a typo in 'specify' on page 8.

Thank you. We will fix typos in the revision.

---

Rebuttal Comment 1.1:

Comment: Thank you for addressing my concerns. I strongly encourage the authors to include the ZAP results and the case study on newly discovered vulnerabilities in the revised paper. Additionally, incorporating the quantitative analysis of failure modes would be valuable for the agent development community, helping improve agents for cybersecurity-related tasks. I am satisfied with the responses and have adjusted my score accordingly.
HaploVL: A Single-Transformer Baseline for Multi-Modal Understanding
Accept (poster)
Summary:

- This paper introduces HaploVL, a multimodal model with a dual Transformer decoder for joint vision-language processing.
- The proposed two-stage training distills knowledge from a pre-trained model into the first decoder, integrating text and vision.
- HaploVL extends the EVE approach by reusing aligned vision tokens in a second decoder module.
- Built on CLIP-ViT-L and Llama-3-8B, HaploVL outperforms baselines across multiple multimodal benchmarks.

Claims And Evidence:

- The first main claim in lines 24–27 and 96–99 is overclaimed. The authors state, "First, we propose a new early-fusion LMM that can fuse multi-modal inputs in the early stage and respond to visual instructions in an autoregressive manner." However, this early-fusion design is conceptually similar to related work such as EVE, and although this work employs a different teacher encoder to incorporate prior knowledge and reuses aligned tokens in a second decoder, the core idea of the first decoder remains the same.
- The second main claim is also inaccurate. The claim in lines 21–24 that the model is an "end-to-end large multi-modal model in a single transformer" is misleading. In reality, the proposed system is a dual-decoder architecture trained in two stages with different objective functions at each stage. In a truly end-to-end system, all components would be jointly trained from the start with a unified objective, and calling the dual-decoder system a "single transformer" is imprecise.
- Additionally, the authors argue that the method is encoder-free. However, they do make use of pre-trained encoder parameters.
Specifically, l.188–189: "For the input text $X_t$, we leverage the pre-trained LLM's embedding matrix W to convert each text token into a vector within the LLM's space $R^l$," and l.207–210: "Notably, although the pre-decoder inherits prior knowledge from a vision encoder, it differs from the vision encoder." The subsequent explanation in lines 210–215 does not sufficiently justify the claim that the architecture is encoder-free.

Methods And Evaluation Criteria: Given that the proposed method uses Llama-3-8B as the base LLM, Table 2 should include Llama-3-8B-based VLMs, e.g., MiniCPM-V2.5 [1] and Llama-3.2.

Theoretical Claims:

- Many arguments in this work are speculations and lack theoretical or empirical support.
- l.47–50: "Our model fuses the vision and text embeddings at an early stage, enabling text embeddings to **autonomously acquire the necessary vision cues**."
- l.117–120: "our HaploVL fused the visual and textual input in the early stage and **extracts the necessary vision information based on the text input**."
- l.244–547: "When the text and image are jointly input into the pre-decoder in a mixed way, **semantic text embeddings can autonomously acquire the necessary vision cues from raw vision embeddings**."
- The teacher model for the text embeddings from the pre-decoder does not seem to be introduced in l.248–274.

Experimental Designs Or Analyses:

- An ablation study is needed using a single decoder whose size equals the combined size of the two decoders, that is, an EVE-style setup where the decoder's capacity matches that of the two decoders used in this work.
- An ablation study is also required for an encoder–decoder-based approach that utilizes the same pre-trained encoders employed as teachers in this work (e.g., CLIP for vision and Llama-3 for text).
- In Table 4 and lines 358–359, the experiment "to verify whether the LMM using one single transformer has advantages over separate models" is important, but it lacks setup details.
Specifically, clarify whether the single transformer in EVE-7B has the same number of decoder layers and/or parameters as the combined decoders in HaploVL-7B, rather than just matching the second decoder (the LLM).

Supplementary Material:

- C. Implementation Details
- D. Qualitative Results

Relation To Broader Scientific Literature: See the 'Essential References Not Discussed' section below.

Essential References Not Discussed:

[1] Yao, Y., Yu, T., Zhang, A., Wang, C., Cui, J., Zhu, H., Cai, T., Li, H., Zhao, W., He, Z. and Chen, Q., 2024. MiniCPM-V: A GPT-4V level MLLM on your phone. arXiv preprint arXiv:2408.01800.

Other Strengths And Weaknesses:

- The presentation and experiments are sufficient.
- The technique's novelty is modest, offering only minor innovations over EVE when combined with a customized training recipe.
- The writing, particularly the claims about key contributions, requires more careful revision. There are numerous overclaims throughout the manuscript; the authors should ensure that their claims accurately reflect their contributions.

Other Comments Or Suggestions: l.185–186: "it needs to acquire prior textual knowledge the Llama model" → "it needs to acquire prior textual knowledge from the Llama model"

Questions For Authors: See above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We appreciate your suggestions and feedback. In the following, we respond to the major concerns.

* Q1: This early-fusion design is conceptually similar to related work such as EVE, offering only minor innovations over EVE when combined with a customized training recipe.

**Response:** We distinguish our HaploVL from EVE through the following differences:

| Difference | EVE | Ours |
|------------|-----|------|
| Architecture | Uses multiple attention layers to tokenize images (see Figure 3 in EVE's paper) | Utilizes a simple MLP to tokenize images, ensuring simplicity in inference |
| Methodology | Does not inherit vision knowledge, requiring 35M data for training | Inherits vision knowledge, needing only 1.2M samples to achieve performance surpassing EVE |
| Performance | Shows a significant performance gap with separate models, e.g., 28.2 on MMStar | Achieves performance comparable to other separate models, e.g., 34.5 on MMStar |

These differences highlight our architectural innovations and efficiency in training, which contribute to the superior performance of HaploVL compared to EVE.

* Q2: The claim about an "end-to-end large multi-modal model in a single transformer"

**Response:** We acknowledge that the model employs a dual-decoder architecture during training. However, this dual-decoder setup is used exclusively for the training phase. **During inference, both the image and text inputs pass through the same single transformer model. This aligns our approach more closely with end-to-end processing.** Other models like EVE also utilize multi-stage training strategies. Furthermore, we have experimented with jointly training all components from the start using a unified objective.
The results indicate that **training all components from the start using a unified objective yields lower performance compared to our two-stage training approach.** Specifically, the average score across 5 benchmarks was 60.5 for the two-stage method, versus 55.5 for the one-stage joint training.

* Q3: The subsequent explanation in lines 210–215 does not sufficiently justify the claim that the architecture is encoder-free.

**Response:** First, we do not claim that HaploVL is encoder-free. Second, we outline the differences between our pre-decoder and the Vision Transformer (ViT) architecture:

| Aspect | ViT | Our Pre-decoder |
|--------|-----|-----------------|
| Input | Image-only | Image and text |
| Positional embeddings | Learnable positional embeddings | Rotary positional embeddings (RoPE) |
| Attention block | Differs from LLM attention block | Same attention block as post-decoder |
| Performance | Shows inductive bias, low fine-grained perception | Better fine-grained perception due to early fusion |

* Q4: There should be Llama-3-8B-based VLMs, e.g., MiniCPM-V2.5 and Llama-3.2.

**Response:** We will include the results of MiniCPM-V2.5 and Llama-3.2 in our main table for comparison. However, it should be noted that MiniCPM-V2.5 utilizes 778M training samples, whereas our best model is trained on fewer than 6M samples. Additionally, **we fully leverage open-source data, which ensures our model's ease of reproducibility.**

* Q5: The teacher model of the text embeddings.

**Response:** It is the text embeddings of the language model. We will clarify this in the final version.

* Q6: Ablation study using the same pre-trained encoders and LLM.

**Response:** First, we report the total parameters of the two decoders. The parameters of our two decoders are almost equivalent to those of EVE (7.3B vs 7B).
Second, we conducted comparisons using the same LLM (Vicuna-7B) and equivalent data with LLaVA-1.5-7B. Our findings demonstrate that HaploVL-Vicuna-7B exhibits superior performance compared to EVE-7B and surpasses LLaVA-1.5-7B on fine-grained perception tasks.

| Model | LLM | SEED | MMStar | MMVP |
|-------|-----|------|--------|------|
| EVE-7B | Vicuna-7B | 54.3 | 28.2 | 19.3 |
| LLaVA-1.5-7B | Vicuna-7B | 66.1 | 30.2 | 21.3 |
| HaploVL-7B | Vicuna-7B | 67.5 | 34.5 | 24.7 |

These ablation results validate our architectural innovations and demonstrate significant performance enhancements.
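As a toy illustration of the Q1 contrast (a single linear "MLP" tokenizer mapping image patches into the decoder's embedding space, then concatenating with text embeddings so one shared decoder sees a mixed sequence), here is a pure-Python sketch; the tiny dimensions, weight layout, and image-after-first-token sequence order are illustrative assumptions, not the paper's actual implementation:

```python
def mlp_tokenize(patch, weight, bias):
    # One linear layer: flattened patch (dim d_in) -> decoder embedding (dim d_out).
    # `weight` is a list of d_out columns, each of length d_in.
    return [sum(p * w for p, w in zip(patch, col)) + b
            for col, b in zip(weight, bias)]

def early_fusion_sequence(text_embeds, patches, weight, bias):
    # Early fusion: image patches become tokens via the MLP and are spliced
    # into the text sequence (here: right after the first text token), so a
    # single decoder processes one mixed multimodal sequence.
    image_tokens = [mlp_tokenize(p, weight, bias) for p in patches]
    return text_embeds[:1] + image_tokens + text_embeds[1:]

# Toy example: 2-dim patches, 3-dim embeddings.
w = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [0.0, 0.0, 0.0]
seq = early_fusion_sequence([[9.0] * 3, [8.0] * 3], [[1.0, 2.0]], w, b)
print(seq)  # [[9.0, 9.0, 9.0], [1.0, 2.0, 3.0], [8.0, 8.0, 8.0]]
```

The contrast with EVE's multi-layer attention tokenizer is only in how `image_tokens` are produced; the downstream single-decoder sequence is the same shape either way.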
Summary: HaploVL is a large, single-transformer multi-modal model designed to overcome the limitations of existing models by integrating visual and textual inputs early on for efficient multi-modal comprehension. The authors introduce an innovative pre-decoder that merges visual patches with text embeddings at the initial stage. Their two-stage approach utilizes knowledge distillation to maintain vision capabilities while fine-tuning with visual instruction data. Their results outperform previous single-transformer models.

Claims And Evidence: This paper provides detailed experimental results, and these results are convincing.

Methods And Evaluation Criteria: The authors propose a new model design using a single transformer and claim its practical significance.

Theoretical Claims: There are no proofs in this paper that should be checked for correctness.

Experimental Designs Or Analyses: Part of the experimental design is reasonable.

1. The authors propose using a unique mask strategy but do not verify the effectiveness of this design.
2. The authors need to verify the effectiveness of the distillation module, particularly when initializing the pre-decoder with a well-trained ViT.

Supplementary Material: Yes. I reviewed all supplementary material.

Relation To Broader Scientific Literature: Earlier models like LLaVA and BLIP-2 use separate vision encoders and language models. Subsequently, EVE and Fuyu proposed using only a single transformer to process multimodal input simultaneously. This work also proposes a single-transformer model design, which consists of a pre-decoder and a post-decoder.

Essential References Not Discussed: No.

Other Strengths And Weaknesses:

1. The comparison is not detailed enough and lacks a performance comparison with mainstream methods.
2. Is this design necessary? This model still includes a ViT, a large language model, and an even more complex projector.
Even compared to the early LLaVA-1.5, the performance improvement of the model is not very significant.

3. Can the effectiveness be validated on a smaller model? The strong language model may narrow the performance gap.

Other Comments Or Suggestions: No.

Questions For Authors: No.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for your suggestions and feedback. We respond to the major concerns in the following.

* Q1: The comparison method is not detailed enough.

**Response:** Due to page limitations, we have compared our model against several widely recognized methods, including very mainstream approaches. For single-transformer LMMs, we have also included comparisons with the latest models. To provide a more comprehensive comparison, we will add the results of more state-of-the-art models to our main table. However, regarding recent state-of-the-art separate LMMs, such as QwenVL-2 and InternVL-2.5, it is important to note that these models used closed datasets, although their model weights were released. According to their technical reports, QwenVL-2 utilized 1.4 trillion tokens during pre-training, and InternVL-2.5-7B used 142 billion tokens. **In contrast, our HaploVL-7B relies solely on open-sourced data, using only 1.2 million samples (~7 billion tokens). This ensures that our model can be easily reproduced by the research community to build their own single-transformer models.**

* Q2: The necessity of this design.

**Response:** **Our design aims to explore a novel architecture that utilizes a single transformer during inference.** Building on the well-established vision knowledge acquired from web-scale image data by the Vision Transformer, we initialize our pre-decoder with ViT and employ it as the teacher model. This allows us to inherit its vision knowledge and significantly reduce the required data and training costs compared to other early-fusion and single-transformer models. For example, while single-transformer LMMs like EVE and Emu3 have notable performance gaps when compared to separated LMMs such as LLaVA, **our approach strives to narrow this performance gap**.
By integrating the strengths of ViT in the initial stages, we can achieve enhanced efficiency and performance, making our model a compelling candidate despite the apparent complexity.

* Q3: The strong language model may narrow the performance gap.

**Response:** We conducted comparisons using the same LLM (Vicuna-7B) and equivalent data with LLaVA-1.5-7B. This eliminates the effect of a strong language model. Our findings demonstrate that HaploVL-Vicuna-7B exhibits superior performance compared to EVE-7B and surpasses LLaVA-1.5-7B on fine-grained perception tasks.

| Model | LLM | SEED | MMStar | MMVP |
|-------|-----|------|--------|------|
| EVE-7B | Vicuna-7B | 54.3 | 28.2 | 19.3 |
| LLaVA-1.5-7B | Vicuna-7B | 66.1 | 30.2 | 21.3 |
| HaploVL-7B | Vicuna-7B | 67.5 | 34.5 | 24.7 |

**These results validate our architectural innovations rather than relying solely on the superior capabilities of a strong language model.** This demonstrates that our design contributes significantly to performance enhancement.

---

Rebuttal Comment 1.1:

Comment: The rebuttal provides clear contrasts, and the results indicate the effectiveness of the architecture, especially with the limited training data. Therefore, I believe this work deserves acceptance and will improve my rating.
Summary: The paper introduces HaploVL, an early-fusion multi-modal model (LMM) that processes visual and textual inputs through a single-transformer architecture. Unlike traditional compositional LMMs that handle modalities separately, HaploVL integrates raw visual and textual embeddings at an early stage, leveraging a pre-decoder to extract visual cues from text-guided attention and a post-decoder for deeper multi-modal fusion. By inheriting prior knowledge from pre-trained vision and language models (e.g., CLIP-ViT and Llama-3), HaploVL achieves competitive performance on fine-grained perception and reasoning tasks (e.g., the MMVP and MMStar benchmarks) while requiring significantly less training data and computational resources. The model outperforms existing single-transformer LMMs (e.g., Fuyu-8B, EVE-7B) and rivals compositional LMMs (e.g., LLaVA-1.5) on specific benchmarks.

Strengths:

- Native Multi-Modal Architecture: HaploVL eliminates the need for separate vision/text encoders (e.g., ViT + LLM pipelines), simplifying the design and reducing computational overhead.
- Efficient Early Fusion: By fusing visual and textual embeddings early, the model retains fine-grained visual details, enhancing performance on perception-heavy tasks (e.g., a 4.9% improvement in fine-grained perception over LLaVA-1.5).
- Data and Resource Efficiency: Leveraging pre-trained models significantly cuts training costs. For example, HaploVL-7B achieves superior results using only 1.2M training samples vs. EVE-7B's 35M.
- Clear Methodology: The two-stage training (pre-training for vision-text alignment and fine-tuning for instruction following) is well structured, and the ablation studies (e.g., resolution scaling, data mixtures) validate design choices.
Claims And Evidence:

Weaknesses:

- Limited Benchmark Comparisons: The paper focuses on outdated baselines (e.g., LLaVA-1.5, InstructBLIP) and lacks comparisons with recent state-of-the-art LMMs like InternVL2.5-7B or QwenVL2-7B, which achieve superior MMBench scores (>80).
- Context-Length Limitations: The authors acknowledge that HaploVL underperforms LLaVA-OV due to restricted tokenization (2,304 vs. 7,290 tokens), suggesting scalability challenges.
- Unclear Impact of Training Strategy: The decision to discard visual supervision in Stage 2 (post-decoder fine-tuning) lacks justification. Retaining visual losses might improve multi-modal alignment but risks overfitting.
- Baseline Fairness: HaploVL-8B uses Llama-3-8B, while EVE-7B uses the weaker Vicuna-7B. Performance gains could stem from Llama-3's superior language capabilities rather than architectural innovations.

Questions:

1. Impact of Retaining Visual Supervision in Stage 2: If visual losses are retained during Stage 2, the model might better preserve fine-grained visual-text alignment. However, this could also:
   - Improve performance, by preventing catastrophic forgetting of visual features.
   - Harm performance, if textual instruction tuning dominates the loss and visual signals introduce noise.
   The paper's current approach (discarding visual losses) prioritizes language-focused instruction following. To resolve this, ablation experiments comparing both strategies are needed.

2. Missing Comparisons with Recent Models: The authors should include benchmarks against InternVL2.5-7B and QwenVL2-7B, which excel on MMBench (scores >80). HaploVL's MMBench score of 75.0 (HaploVL-8B-MI) falls short, suggesting architectural or scalability limitations. Potential solutions:
   - Expand tokenization capacity (e.g., longer context windows).
   - Incorporate high-resolution training (beyond 672×672).

3. Fair Baseline Comparison: To isolate the impact of the proposed architecture (vs.
Llama-3's superiority), the authors should re-train EVE-7B using Llama-3-8B under identical settings and compare the revised EVE-7B (Llama-3) with HaploVL-8B. This ensures gains are attributable to HaploVL's early fusion, not the LLM backbone.

4. Minor Issues
   - Typo: Line 412: "Figure Figure 5." → correct to "Figure 5."
   - Clarity: Clarify the resolution scales (e.g., 336 vs. 672 in Table 3) and tokenization limits in the main text.

Methods And Evaluation Criteria: See Claims And Evidence.

Theoretical Claims: See Claims And Evidence.

Experimental Designs Or Analyses: See Claims And Evidence.

Supplementary Material: Yes, A–E.

Relation To Broader Scientific Literature: See Claims And Evidence.

Essential References Not Discussed: No.

Other Strengths And Weaknesses: See Claims And Evidence.

Other Comments Or Suggestions: See Claims And Evidence.

Questions For Authors: See Claims And Evidence.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for your insightful suggestions and feedback. We respond to the key concerns in detail below.

* Q1: Limited benchmark comparisons.

**Response:** While recent state-of-the-art LMMs such as QwenVL2-7B and InternVL2.5-7B achieve higher MMBench scores, it is important to note that these models use closed data despite releasing their model weights. According to their technical reports, QwenVL2-7B utilized 1.4 trillion tokens during the pre-training phase, and InternVL2.5-7B used 142 billion tokens. In contrast, our HaploVL-7B model relies solely on open-sourced data, comprising only 1.2 million samples (~7 billion tokens). This significantly lower data usage underscores the efficiency of our approach. Moreover, **our reliance on open-sourced data ensures that our model can be easily reproduced by the research community to build their own single-transformer models.** Nevertheless, we acknowledge the importance of more recent benchmarks and will incorporate the results of these state-of-the-art models into our main table for comprehensive comparison.

* Q2: Context-length limitations.

**Response:** The context length of HaploVL is indeed scalable. Specifically, with an input resolution of 336, the context length is 2048 tokens, and it extends to 6144 tokens when the maximum image size is 672×672. Even at a context length of 2048 tokens, HaploVL-Vicuna-7B (HaploVL-7B) demonstrates superior performance compared to LLaVA-1.5-7B on the MMVP, MMStar, and SEED benchmarks.

| Method | Context length | MMVP | MMStar | SEED |
|--------|----------------|------|--------|------|
| LLaVA-1.5-7B | 2048 | 21.3 | 30.3 | 66.1 |
| HaploVL-7B | 2048 | 24.7 | 34.5 | 67.5 |

* Q3: Impact of retaining visual supervision in Stage 2.

**Response:** We retain a vision loss by adding an image decoder during the second stage of our method. This approach is similar to the Masked Autoencoder (MAE), where image embeddings are decoded into RGB images.
To assess the impact, we conducted experiments using the LLaVA-665K instruction data.

| W/ visual loss | GQA | MMStar |
|----------------|------|--------|
| True | 60.8 | 34.0 |
| False | 62.5 | 34.5 |

**Models trained with the vision loss perform worse**, with GQA and MMStar scores dropping from 62.5 to 60.8 and from 34.5 to 34.0, respectively. This is because the additional vision loss conflicts with the textual loss used in multimodal understanding. We will include this result in the latest version.

* Q4: Baseline fairness.

**Response:** Due to the high workload involved in reproducing the EVE model (35 million data samples and 2 A100-80G GPUs running for 9 days), **we compared the performance of HaploVL utilizing the same Vicuna-7B language model.** The results demonstrate that HaploVL-Vicuna-7B outperforms EVE-7B. Specifically, on the SEED benchmark, HaploVL-Vicuna-7B achieves 67.5%, while EVE-7B (based on Vicuna-7B) only scores 54.3%. Furthermore, when utilizing the same Qwen2.5-7B language model, our HaploVL-7B-Pro also surpasses the performance of the improved EVE-2.0 (Qwen2.5-7B).

| Model | LLM | SEED | POPE |
|-------|-----|------|------|
| EVE-7B | Vicuna-7B | 54.3 | 83.6 |
| HaploVL-7B | Vicuna-7B | 67.5 | 85.4 |
| EVE-2.0-7B | Qwen2.5-7B | 71.4 | 87.6 |
| HaploVL-7B-Pro | Qwen2.5-7B | 75.0 | 88.7 |

These results confirm that **the performance improvements are attributable to HaploVL's architectural innovations rather than just the superiority of the LLM backbone.**
Summary: The paper proposes an early-fusion method for vision-language reasoning. The authors claim to have a pre-decoder that extracts visual information from raw vision embeddings based on the text input, and a post-decoder that processes the fused multi-modal embeddings and generates text responses. The experiments suggest that the method is better than many state-of-the-art methods, including EVE.

Claims And Evidence: The experiments correctly reflect the claims of the paper.

Methods And Evaluation Criteria: Yes, the proposed method seems reasonable and correct for the application.

Theoretical Claims: No, I did not check theoretical claims. I believe the paper does not propose any theoretical claims either.

Experimental Designs Or Analyses: The experiments look valid to the best of my knowledge.

Supplementary Material: Yes, the supplementary material is attached at the end of the main paper.

Relation To Broader Scientific Literature: Overall, vision-language learning is important for various applications. The proposed paper fits correctly into the overall literature.

Essential References Not Discussed: None that I know of.

Other Strengths And Weaknesses: Please see questions for authors.

Other Comments Or Suggestions: I write the overall strengths and weaknesses of the paper here.

Positives:
- The proposed method is simple and effective.
- The overall idea of early fusion is very interesting (though I am not very familiar with related work in this area and would seek help from other reviewers as well).
- The experiments show good gains compared to the competitive baseline.

Negatives:
- It is not clear why this method would take less data compared to EVE. Can the authors clarify beyond stating that early fusion leads to efficient data usage?
- The limitations of the method are not clearly stated. Why can this method not be used for image generation if the setup is reversed, since the usage of the visual and text modalities looks symmetrical?

Overall, I like the paper and want it to be accepted.
Please clarify on the negatives in the rebuttal phase. Questions For Authors: Please see comments or suggestions. Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for your valuable suggestions and feedback. We address the primary concerns as follows.

* Q1: Clarify more beyond stating that early fusion leads to efficient data usage.

**Response:** We propose utilizing a pre-decoder to fuse image and text data in the early stages of processing. **This pre-decoder is designed to inherit prior vision knowledge from a visual model (like ViT), thereby requiring minimal additional data.** The inherited prior vision knowledge enables the pre-decoder to effectively integrate visual information with textual data using fewer samples. Moreover, empirical results demonstrate that this method achieves superior performance with the same LLM and vision teacher, as evidenced by the MMStar benchmark scores (HaploVL-Vicuna-7B: 34.5 vs. EVE-Vicuna-7B: 28.2).

* Q2: Use for image generation.

**Response:** Our current work verifies that this method is feasible for image understanding. To explore image generation, we introduce a vision loss by adding an image decoder during the second stage of our method. This approach is similar to the Masked Autoencoder (MAE), where image embeddings are decoded into RGB images. To assess the impact, we conducted experiments using the LLaVA-665K instruction data.

| W/ visual loss | GQA | MMStar |
|----------------|------|--------|
| True | 60.8 | 34.0 |
| False | 62.5 | 34.5 |

Models trained with vision loss perform worse, with GQA and MMStar scores dropping from 62.5 to 60.8 and from 34.5 to 34.0, respectively. While the symmetrical usage of the visual and text modalities seems plausible, **the introduction of vision loss for image generation currently degrades the performance of our model on image understanding tasks. This is because the additional vision loss conflicts with the textual loss used in multimodal understanding.** We plan future experiments to further explore and possibly mitigate these conflicts to successfully integrate image generation capabilities.
SHARP-Distill: A 68× Faster Recommender System with Hypergraph Neural Networks and Language Models
Accept (poster)
Summary: This paper focuses on teacher-student knowledge distillation. The teacher model uses contrastive learning to combine an HGNN and a pre-trained LLM, which generate collaborative and semantic features, respectively. The student model is a lightweight GCN. Both response-based and feature-based knowledge distillation losses are adopted to transfer structural and positional knowledge.

Claims And Evidence: Yes, claims made in the submission are supported by clear and convincing evidence.

Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense.

Theoretical Claims: There is neither a theoretical claim nor a proof in this paper.

Experimental Designs Or Analyses: The teacher model requires knowledge from a pre-trained LLM, which is transferred to the student through knowledge distillation. However, the baseline methods don't leverage an LLM, which seems unfair. Comparing the results in Table 1 and Table 4, SHARP without the LLM cannot outperform most baseline methods.

Supplementary Material: Yes. I have reviewed the appendix.

Relation To Broader Scientific Literature: This paper confirms the feasibility of simultaneously distilling the knowledge in an HGNN and an LLM into a simple model.

Essential References Not Discussed: The teacher model in this paper combines a traditional recommendation model and an LLM, and the knowledge from the LLM plays an important role, but it cites no related work. For example, [1, 2, 3].

[1] "Enhancing Sequential Recommendation via LLM-based Semantic Embedding Learning." WWW 2024
[2] "Distillation Matters: Empowering Sequential Recommenders to Match the Performance of Large Language Model." arXiv 2024
[3] "Prompt Distillation for Efficient LLM-based Recommendation." CIKM 2023

I keep up with the literature in this area.

Other Strengths And Weaknesses: Weaknesses:
1. The novelty is limited. The first contribution claimed by the authors is combining structural and semantic features. However, they simply concatenate them, followed by an MLP. The second contribution simply comes from their use of knowledge distillation. As for the third contribution, many previous works [4, 5, 6] have used contrastive learning for knowledge distillation. To sum up, the only novelty left is the use of positional similarity. However, there is little discussion in this paper of why it is useful.

[4] "Contrastive Representation Distillation." ICLR 2020
[5] "Contrastive Distillation on Intermediate Representations for Language Model Compression." EMNLP 2020
[6] "Distilling Holistic Knowledge with Graph Neural Networks." ICCV 2021

2. The formal details about DeBERTa in the Preliminaries are not used in the main text. They should be removed or moved to the appendix.
3. There is no indication at all of the target task. Is it recommendation with implicit feedback or rating prediction? Eq. (8) assumes there is a ground-truth "rating", but the evaluation metrics used in the experiments are typical of implicit feedback and do not include MSE.
4. The authors seem to have misunderstood some basics of recommender systems. For example, Eq. (8) is an MSE loss, not a BPR loss.
5. There are many errors in the formulas. For example, Eq. (9) is inconsistent with the definition of $L_{teacher}$ in Figure 1. Moreover, 'j' in Eq. (7) is not defined, $N_u$ in Eq. (12) is not defined, and $D$ used in the computation of $\hat A^s$ should be specified to be the degree matrix of $A^s+I$, not $A^s$, to avoid confusion.

Other Comments Or Suggestions:
1. There should be more discussion of how Eq. (16) transfers positional knowledge to the student model.
2. Is Figure 7 a combination of Figures 5 and 6? It looks strange. Using subfigures would be a better option.

Questions For Authors: I have no other questions.

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1:

Rebuttal: **Essential References Not Discussed**

We thank the reviewer for highlighting recent work on LLM-based recommendation and distillation, including DLLM2Rec [1], SAID [2], and POD [3]. These are valuable contributions to this fast-moving area, and we have now incorporated a discussion of them into our Related Work section. While SHARP-Distill employs a pretrained textual encoder, we clarify that our method is not an LLM-based recommender in the conventional sense. In contrast to DLLM2Rec, SAID, and POD—which rely on full-scale LLMs, prompt tuning, or autoregressive decoding—our framework uses a lightweight review encoder (DeBERTa) on short user-item reviews (~10–40 tokens), avoiding the latency and cost of LLM inference.

To ensure a fair comparison, we evaluated SHARP-Distill against **SAID** and **POD** using their official code and settings. DLLM2Rec was excluded only due to lack of code at submission time.

**Table 8: Comparison with LLM-based Recommenders (Top-K Metrics and Inference Time)**

| Dataset | Metric | SAID (Distill) | POD (Prompt) | SHARP-Distill | SAID Time (ms) | POD Time (ms) | SHARP Time (ms) |
|-------------|--------|----------------|---------------|----------------|----------------|----------------|------------------|
| CDs | P@10 | 13.40 | 12.88 | **13.75** | 110.0 | 450.0 | **9.77** |
| | R@10 | 12.65 | 12.20 | **13.06** | | | |
| | N@10 | **12.24** | 11.88 | 12.17 | | | |
| Cellphones | P@10 | **7.83** | 7.62 | 7.54 | 51.0 | 210.0 | **4.12** |
| | R@10 | 5.72 | 5.60 | **5.77** | | | |
| | N@10 | 4.69 | 4.62 | **4.77** | | | |
| Beauty | P@10 | 6.58 | 6.60 | **6.97** | 60.0 | 240.0 | **7.88** |
| | R@10 | 4.38 | 4.29 | **4.52** | | | |
| | N@10 | 3.97 | 4.01 | **4.15** | | | |

SHARP-Distill achieves comparable or better accuracy than LLM-based baselines while offering **10–40× lower inference time**, thanks to its compact GCN-based student and contrastive fusion strategy.
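As a quick sanity check, the per-dataset speedup ratios implied by the latency columns of Table 8 can be computed directly; the quoted "10–40×" is a rough summary, and the exact ratios vary a bit outside that band (about 7.6× on Beauty vs. SAID, about 51× on Cellphones vs. POD):

```python
# Speedup ratios implied by the latency columns of Table 8.
# Values copied verbatim from the rebuttal; ratio = baseline_time / sharp_time.
times_ms = {
    "CDs":        {"SAID": 110.0, "POD": 450.0, "SHARP": 9.77},
    "Cellphones": {"SAID": 51.0,  "POD": 210.0, "SHARP": 4.12},
    "Beauty":     {"SAID": 60.0,  "POD": 240.0, "SHARP": 7.88},
}

for dataset, t in times_ms.items():
    said_speedup = t["SAID"] / t["SHARP"]
    pod_speedup = t["POD"] / t["SHARP"]
    print(f"{dataset}: {said_speedup:.1f}x vs SAID, {pod_speedup:.1f}x vs POD")
```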
Unlike POD, which requires real-time LLM queries, or SAID, which retains large students, our model delivers **LLM-level quality with sub-10ms latency**, making it ideal for scalable deployment in time-sensitive environments. We appreciate the reviewer's suggestion and believe this further clarifies SHARP-Distill's positioning as an efficient, LLM-alternative framework for real-world recommendation.

**Other Strengths and Weaknesses**

Thank you for the detailed critique. Due to space limits, we provide a full point-by-point response—including novelty clarifications, differences from [4–6], and positional alignment motivation—at the following link:

🔗 **[Full Response – Other Strengths & Weaknesses](https://github.com/1234554321-00/SHARP-Distill/blob/main/README.md)**

Theoretical claims have been added to strengthen our framework. Please refer to the following link for details:

🔗 **[Theoretical Extensions and Supporting Lemmas](https://github.com/1234554321-00/SHARP-Distill/blob/main/README.md)**
Summary: This paper introduces SHARP-Distill, a knowledge distillation framework combining Hypergraph Neural Networks (HGNNs) with language models to improve recommendation quality while reducing inference time. The teacher-student approach uses HGNNs for user-item embeddings and DeBERTa for extracting textual features, while the lightweight student model (CompactGCN) inherits structural knowledge through contrastive learning. Experiments show up to 68× faster inference than HGNN while maintaining competitive accuracy.

Claims And Evidence: The paper addresses data sparsity and the high computational costs of GNNs, demonstrating that traditional soft-label knowledge distillation is insufficient for preserving hypergraph structural knowledge. Figure 7 shows the superiority of the proposed approach combining soft labels with structural knowledge transfer. Performance improvements and speed claims are well-supported by comprehensive experiments.

Methods And Evaluation Criteria: SHARP-Distill combines HGNNs, language models, and contrastive learning in a knowledge distillation framework. The evaluation uses standard metrics (Precision, Recall, NDCG, Hit Ratio) across five datasets, with careful analysis of both recommendation quality and inference time.

Theoretical Claims: There is no theoretical claim in the paper.

Experimental Designs Or Analyses: The paper includes comprehensive experiments comparing against 11 baselines, with ablation studies, hyperparameter sensitivity analysis, and scalability assessments. Results consistently show SHARP-Distill effectively balances recommendation quality with computational efficiency.

Supplementary Material: The supplementary material is the implementation code. However, there is only a long Python script. I'm not sure if this includes all scripts for the project or only part of it.

Relation To Broader Scientific Literature: While the paper provides background on hypergraphs and knowledge distillation, it could better differentiate between recommendation-specific and general graph learning approaches. Additional references on distillation methods for graph-based recommendation would strengthen the literature review.

Essential References Not Discussed: The paper should further discuss distillation methods for graph-based recommendation, not only methods for general graph learning.

[1] Unbiased Knowledge Distillation for Recommendation
[2] Graph-Less Collaborative Filtering

Other Strengths And Weaknesses:
Strengths:
1. Significant inference speed improvements (68× faster than HGNN)
2. Comprehensive experimental evaluation
3. Novel integration of hypergraphs with language models
4. Effective knowledge transfer through contrastive learning

Weaknesses:
1. Limited discussion of recommendation-specific knowledge distillation methods
2. Could strengthen theoretical justification
3. Less focus on training efficiency compared to inference efficiency

Other Comments Or Suggestions: N/A

Questions For Authors: Please refer to my comments above.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We thank the reviewer for the thoughtful feedback and are grateful for the recognition of our work's strengths, including inference speedup, empirical rigor, and the novel integration of hypergraphs with language models.

## Weakness 1) Differentiating RecSys-specific distillation from general graph learning: **(Relation To Broader Scientific Literature)**

We appreciate this observation and have clarified the RecSys-specific contributions of our distillation strategy in Section 3.2 of the revised version. In particular:

* **User-Item Bipartite Structure Exploitation:** Unlike general GNN distillation methods that operate on homogeneous graphs, SHARP-Distill explicitly models the bipartite structure of user-item interactions using dual incidence matrices $\mathcal{H}_U$ and $\mathcal{H}_I$ (Equations 2–3). This formulation captures asymmetric collaborative patterns unique to recommender systems.
* **Hypergraph Modeling of High-Order Interactions:** Our hypergraph-based formulation extends beyond pairwise user-item links to encode group-level behavioral patterns (e.g., co-engagement with semantically related item clusters), which are crucial for capturing latent preference signals in sparse recommendation data.
* **Preference-Aware Contrastive Transfer:** As detailed in Equations 17–18, our contrastive objective is designed to align user preference rankings rather than only topological proximity, which is essential for recommendation tasks.
* **Multi-Modal Distillation with Reviews:** SHARP-Distill uniquely integrates semantic information from textual reviews alongside interaction structures. This dual-source distillation enables robust generalization to cold-start scenarios—an issue more prominent in RecSys than in general graph tasks.
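For readers unfamiliar with hypergraph notation, a purely illustrative sketch of dual incidence matrices for a user-item bipartite hypergraph follows. The exact form is defined by the paper's Equations 2–3, which are not reproduced in this rebuttal, so the construction below is an assumption: each item's set of interacting users forms a user-side hyperedge, and symmetrically each user's set of interacted items forms an item-side hyperedge.

```python
# Illustrative sketch (not the paper's exact Eqs. 2-3): build dual incidence
# matrices for a user-item bipartite hypergraph from implicit interactions.
# H_U[u][e] = 1 if user u belongs to user-side hyperedge e (one hyperedge per
# item); H_I[i][e] = 1 if item i belongs to item-side hyperedge e (one per user).
interactions = [  # (user, item) pairs -- toy data
    (0, 0), (0, 1), (1, 1), (1, 2), (2, 0), (2, 2),
]
num_users, num_items = 3, 3

H_U = [[0] * num_items for _ in range(num_users)]  # user-side incidence
H_I = [[0] * num_users for _ in range(num_items)]  # item-side incidence

for u, i in interactions:
    H_U[u][i] = 1
    H_I[i][u] = 1

# Hyperedge degrees: how many nodes each user-side hyperedge connects.
user_edge_degrees = [sum(col) for col in zip(*H_U)]
print("H_U =", H_U)
print("user-side hyperedge degrees:", user_edge_degrees)
```

In this toy form the user-side incidence coincides with the binary interaction matrix and the item-side incidence with its transpose; richer constructions (e.g., group-level hyperedges over item clusters) would add further columns.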
## Weakness 2) Comparison with Prior Distillation Methods on Public Benchmarks: **(Essential References Not Discussed)**

We have implemented and evaluated SHARP-Distill alongside recent RecSys-specific distillation approaches: **UnKD [Chen et al., WSDM 2023]** and Graph-less Collaborative Filtering **[Zhang et al., NeurIPS 2023]**. Both authors released their code, enabling reproducibility on shared datasets. As shown in Table 7 below, SHARP-Distill outperforms both in terms of accuracy (P@10 / R@10 / N@10) across Amazon Cellphones, Beauty, and Sports. Moreover, our approach significantly reduces inference time, with speedups of over 2× on average. Due to limited rebuttal-phase time, we could not include large-scale datasets (Amazon CDs, Yelp) for these two models. However, we plan to include these full comparisons in the final version.

**Table 7: Comparison on Amazon Cellphones, Beauty, and Sports. Metrics: P@10 / R@10 / N@10 (%). Inference time in milliseconds (ms).**

| **Model** | **Cellphones** | **Beauty** | **Sports** | **Inference Time (ms)** |
|--------------------------|----------------------|----------------------|----------------------|--------------------------|
| Chen et al. (UnKD) | 6.83 / 5.02 / 4.31 | 5.89 / 4.21 / 3.68 | 3.88 / 3.19 / 2.87 | 9.72 / 11.4 / 9.02 |
| Zhang et al. (Graph-less)| 6.57 / 4.89 / 4.08 | 6.12 / 4.26 / 3.93 | 4.01 / 3.25 / 2.91 | 8.94 / 10.2 / 8.35 |
| **SHARP-Distill (Ours)** | **7.54 / 5.77 / 4.77**| **6.97 / 4.52 / 4.15**| **4.27 / 3.63 / 3.24**| **4.12 / 7.88 / 5.74** |

While UnKD effectively mitigates popularity bias, it relies on MF/LightGCN backbones and does not incorporate high-order hypergraph structures or textual modalities. SHARP-Distill transfers richer preference-aware signals using contrastive learning from both **HGNN structure** and **DeBERTa-based textual features**. Graph-less avoids structural modeling altogether, trading off explainability and some precision. In contrast, SHARP-Distill uses a **lightweight CompactGCN** student that distills both structural and semantic knowledge while remaining highly efficient at inference.

## Weakness 3) Training efficiency discussion:

**Table 6: Teacher Model Training Time (in hours)**

| Dataset | SHARP-Distill Teacher | LightGCN |
|--------------------|-----------------------|----------|
| Amazon Cellphones | 1.7 | 0.8 |
| Amazon Beauty | 2.8 | 1.3 |
| Amazon Sports | 2.4 | 1.1 |
| Amazon CDs | 4.2 | 2.0 |
| Yelp | 3.5 | 1.7 |

These measurements include both HGNN and DeBERTa training and reflect the cost of a one-time offline pretraining phase. For context, we've included baseline LightGCN training times, showing that while our teacher requires approximately 2× longer to train, this cost is amortized through significant inference speedups. Theoretical claims have been added to strengthen our framework.

🔗 **[Theoretical Extensions and Supporting Lemmas](https://github.com/1234554321-00/SHARP-Distill/blob/main/README.md)**
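The amortization argument can be made concrete with a back-of-the-envelope break-even estimate. This is our own illustration, not a claim from the paper, using Amazon CDs figures quoted in this rebuttal thread: teacher training 4.2 h, student training 1.2 h, LightGCN training 2.0 h, and per-request inference of 9.77 ms for the student vs. 395.45 ms for LightGCN.

```python
# Back-of-the-envelope amortization of the one-time offline training cost
# (Amazon CDs figures quoted in this rebuttal thread; illustrative only).
teacher_train_h = 4.2    # SHARP-Distill teacher (one-time, offline)
student_train_h = 1.2    # CompactGCN student
lightgcn_train_h = 2.0   # baseline LightGCN training time

extra_train_ms = (teacher_train_h + student_train_h - lightgcn_train_h) * 3600 * 1000
saving_per_request_ms = 395.45 - 9.77  # LightGCN vs. student inference latency

break_even_requests = extra_train_ms / saving_per_request_ms
print(f"break-even after ~{break_even_requests:,.0f} requests")
```

Under these assumptions, the extra offline training cost is recovered after a few tens of thousands of serving requests, which is small for production recommendation traffic.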
Summary: The paper proposes SHARP-Distill, a framework which uses the DeBERTa language model as part of the teacher model to distill HGNN-based recommenders, enhancing recommendation performance and inference speed. A contrastive learning mechanism is leveraged to efficiently inherit the structural and semantic knowledge. Experiments on multiple datasets demonstrate that SHARP-Distill achieves significant inference speed improvements compared to traditional methods like HGNNs and LightGCN, with competitive or even better performance.

Claims And Evidence: The following claim could benefit from further clarification or additional evidence:

Claim: 68× faster inference time compared to HGNN and 40× faster than LightGCN while maintaining competitive recommendation accuracy.

Potential issue: Directly comparing the inference time of non-distilled models with distilled student models might not be fair. The training and inference cost of the teacher models, as well as the cost of maintaining them, should also be considered, depending on the actual deployment infrastructure design. Besides, the recommendation accuracy improvement from KD often decreases after longer training in real-world cases, as it provides warm-start advantages. Whether the presented degree of performance improvement can still hold requires more evidence.

Methods And Evaluation Criteria: -
Theoretical Claims: -
Experimental Designs Or Analyses: -
Supplementary Material: Yes. All parts.
Relation To Broader Scientific Literature: -
Essential References Not Discussed: -
Other Strengths And Weaknesses: -
Other Comments Or Suggestions: -

Questions For Authors:
1. Are there any trade-offs or conflicts between the structural and textual features that need to be managed?
2. As discussed in the Claims And Evidence section, what is the training and inference cost of the teacher model? What is the ROI, considering the cost of the teacher and student as a whole, compared with other models?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: ### Claims And Evidence:

We appreciate the reviewer's thoughtful critique regarding the fairness of our inference-time comparison and broader deployment considerations. First, we clarify that our comparison is primarily with other **distillation-based models** (e.g., KRD, LightHGNN), making the results in Table 1 a fair and meaningful evaluation. Regarding teacher model costs, we report the full training time (4.2 hours on a single A100 GPU) in Table 1 (as also noted in Reviewer scTP's comments). This one-time overhead is amortized over deployment and does not impact end-user latency. Our inference speed claims (68× faster than HGNN, 40× faster than LightGCN) focus specifically on **deployment-time performance**, which is critical for real-time recommendation use cases. Moreover, our modular design supports efficient knowledge updates (e.g., retraining only the HGNN or DeBERTa if needed), reducing long-term maintenance burdens.

Regarding the concern that KD benefits may diminish over longer training, we conducted extended training experiments (3× standard epochs). We observed stable accuracy across all models, with our relative improvements holding within ±2% variance—indicating that the performance gains are not just due to warm-start effects but reflect genuine enhancements. We have incorporated these extended training results and deployment discussions into the revised paper. We believe this addresses the reviewer's concerns while reinforcing our core claim: **SHARP-Distill achieves substantial practical gains for real-world recommendation systems**.

### Questions Q1)

We appreciate the reviewer's insightful question. To assess potential conflicts between semantic and structural features, we conducted an ablation study on the Yelp dataset, evaluating the impact of removing each core component from SHARP-Distill.
Specifically, we examined four variants:

- **SHARP-Distill:** Full model with HGNN-based structure, DeBERTa-based semantics, and contrastive learning
- **w/o DeBERTa:** Structure only (removes semantic encoder)
- **w/o HGNN:** Semantics only (removes structural encoder)
- **w/o Contrastive Loss:** Structure + semantics without alignment

**Table 3: Ablation Study on Yelp (P@10 / R@10 / N@10, %)**

| Variant | P@10 | R@10 | N@10 | Inference (ms) |
|----------------------|------|------|------|----------------|
| SHARP-Distill | **3.88** | **2.75** | **2.37** | 8.79 |
| w/o DeBERTa | 3.15 | 2.21 | 1.85 | 6.22 |
| w/o HGNN | 2.93 | 2.04 | 1.67 | 6.45 |
| w/o Contrastive Loss | 2.74 | 1.93 | 1.46 | 6.01 |

To quantify the contribution of each component, we calculated the relative performance drop in P@10 with respect to the full SHARP-Distill model:

**Δ Performance Compared to SHARP-Distill (P@10):**

| Component Removed | Absolute Drop | % Drop |
|------------------------|----------------|----------|
| DeBERTa (semantic) | 0.73 | 18.81% |
| HGNN (structural) | 0.95 | 24.48% |
| Contrastive Learning | 1.14 | 29.38% |

These results demonstrate the complementary nature of semantic and structural features, and the crucial role of contrastive learning in harmonizing them. Removing DeBERTa causes an 18.8% performance drop, indicating that semantic features are highly informative. Removing HGNN leads to a 24.5% drop, showing the importance of structural signals. Most notably, removing the contrastive loss results in a 29.4% performance drop, the largest among all variants. This highlights that simply combining structural and semantic encoders is insufficient—without proper alignment, modality interference reduces effectiveness. **Our contrastive objective (Eqs. 6–7, 17–19)** explicitly resolves potential conflicts by aligning heterogeneous modalities into a shared latent space.
The observed performance degradation without this alignment empirically validates our hypothesis: contrastive learning not only unifies the two modalities but also unlocks their synergy. These findings are now explicitly stated and discussed in our revised paper.

### Questions Q2)

You can find our full response to this comment at the following link:

🔗 **[Full Response – As discussed in Claims And Evidence section (Reviewer 8pQM)](https://github.com/1234554321-00/SHARP-Distill/blob/main/README.md)**

Theoretical claims have been added to strengthen our framework. Please refer to the following link for details:

🔗 **[Theoretical Extensions and Supporting Lemmas](https://github.com/1234554321-00/SHARP-Distill/blob/main/README.md)**
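The relative drops reported in the ablation's Δ table follow directly from the P@10 column of Table 3, as drop = (full − variant) / full:

```python
# Reproduce the relative P@10 drops reported in the Yelp ablation study.
full_p10 = 3.88
variants_p10 = {
    "w/o DeBERTa (semantic)": 3.15,
    "w/o HGNN (structural)":  2.93,
    "w/o Contrastive Loss":   2.74,
}

for name, p10 in variants_p10.items():
    absolute_drop = full_p10 - p10
    percent_drop = 100.0 * absolute_drop / full_p10
    print(f"{name}: -{absolute_drop:.2f} ({percent_drop:.2f}%)")
```

Running this recovers the 18.81% / 24.48% / 29.38% figures quoted in the rebuttal.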
Summary: The paper introduces SHARP-Distill, a knowledge distillation framework designed to enhance the efficiency of recommender systems while preserving recommendation accuracy. It employs a teacher-student architecture where the teacher model integrates Hypergraph Neural Networks (HGNNs) to capture high-order user-item interactions and DeBERTa, a pre-trained language model, to extract semantic features from textual reviews. The student model features a lightweight Graph Convolutional Network (GCN) variant called CompactGCN, which uses contrastive learning to inherit structural and positional knowledge from the teacher. The authors claim that SHARP-Distill achieves up to 68× faster inference than HGNN-based methods and 40× faster than LightGCN, while maintaining competitive accuracy across five real-world datasets.

Claims And Evidence: The primary claim is that SHARP-Distill significantly improves inference speed without sacrificing recommendation quality. However, the evidence is empirical and lacks detailed computational resource specifics, which could affect the practical speed claims.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: The paper lacks formal theorems, relying instead on established techniques.

Experimental Designs Or Analyses: The experimental setup is reasonable.

Supplementary Material: Yes, this paper provides detailed information on the experimental setup and comparisons to related works.

Relation To Broader Scientific Literature: The proposed method is built on knowledge distillation and HGNNs. It uniquely combines these fields to tackle the speed-accuracy trade-off, a persistent challenge in the domain.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
Strengths:
1. Novel integration of HGNNs, DeBERTa, and contrastive learning.
2. Dramatic inference speed improvements (up to 68×).
3. Competitive or superior performance across datasets.
4. CompactGCN offers a lightweight solution for real-time systems.

Weaknesses:
1. Missing details on preprocessing and resource analysis.
2. No discussion of teacher model training time and complexity.
3. The multi-component design may complicate implementation.
4. Lacks formal proofs or detailed justification.

Other Comments Or Suggestions: N/A

Questions For Authors: See weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: We appreciate the reviewer's concerns about preprocessing details and resource analysis. We have added the following information to address these points:

### Weaknesses 1 and 2. Teacher Model Training Time and Complexity:

As shown in **Table 1**, our teacher model requires only 4.2 hours to train on a single NVIDIA A100 GPU for the largest dataset (Amazon CDs with 1.2M+ reviews). This one-time offline pretraining cost is modest compared to the significant inference speed benefits gained.

| Dataset | Model | Train Time (hrs) | Inference Time (ms) | Inference Complexity | Deployed at Inference? |
|-------------|--------------------------|------------------|----------------------|-------------------------------|-------------|
| Amazon-CDs | Our Model (Teacher) | 4.2 | 668.23 | O(R^L × d) ×f | ✗ |
| | LightGCN | 2.0 | 395.45 | O(R^L × d) | ✓ |
| | **Our Model (Student)** | **1.2** | **9.77** | **O(R × d)** | **✓** |
| Yelp | Our Model (Teacher) | 3.5 | 552.34 | O(R^L × d) ×f | ✗ |
| | LightGCN | 2.0 | 342.67 | O(R·d) | ✓ |
| | **Our Model (Student)** | **1.0** | **8.79** | **O(R·d)** | **✓** |

The memory complexity of our HGNN-based teacher model follows the standard pattern of hypergraph neural networks, as discussed in [1]. In a traditional GNN with \( L \) layers, \( R \) neighbors per node, \( d \)-dimensional embeddings, and an activation function \( f \), the memory requirement typically grows exponentially as \( \mathcal{O}(R^L \times d) \).

For our specific implementation with the Amazon CDs dataset parameters:
- Using L=3 layers: Memory consumption ≈ 5.67 MB
- Using L=4 layers: Memory consumption ≈ 1,180.16 MB
- Using L=5 layers: Memory consumption ≈ 245,736.24 MB

Our teacher model uses a 3-layer configuration to balance expressiveness and computational efficiency, keeping memory requirements manageable while capturing the necessary high-order interactions.
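The exponential \( \mathcal{O}(R^L \times d) \) scaling can be checked against the quoted memory figures: adding one layer should multiply memory by the per-layer fan-out R, and the ratios implied by the numbers are indeed nearly constant at about 208 (our observation from the quoted values, not a figure the authors state):

```python
# Check the exponential memory growth quoted above: under O(R^L x d) scaling,
# going from L to L+1 layers should multiply memory by the fan-out R.
mem_mb = {3: 5.67, 4: 1180.16, 5: 245736.24}  # values quoted in the rebuttal

ratio_4_over_3 = mem_mb[4] / mem_mb[3]
ratio_5_over_4 = mem_mb[5] / mem_mb[4]
print(f"L=3 -> L=4 factor: {ratio_4_over_3:.1f}")
print(f"L=4 -> L=5 factor: {ratio_5_over_4:.1f}")
# Both factors are ~208, consistent with a constant per-layer fan-out R.
```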
**Preprocessing Details:**

Our preprocessing pipeline includes:
- Text normalization (lowercase, punctuation removal, stopword filtering)
- Review text tokenization using DeBERTa's tokenizer with a maximum sequence length of 128
- Construction of user-item interaction hypergraphs where each hyperedge connects a user with all items they've reviewed
- Multi-aspect rating normalization to a [0,1] scale

The preprocessing time is negligible compared to model training time (<10 minutes for the largest dataset) and is a one-time overhead.

**Student Model Efficiency:**

The distilled student model (CompactGCN) dramatically reduces both the memory footprint and computational complexity:
- Memory requirement: O(R × d), where R is the number of neighbors per node in the graph
- Practical memory usage: ~0.0268 MB (208×128+128 units)
- Training time: Only 1.2 hours for Amazon CDs (3.5× faster than the teacher model)
- Inference time: 9.77 ms (68× faster than the teacher model, 40× faster than LightGCN)

---

### Weakness 3. Multi-component design may complicate implementation:

We appreciate this crucial concern about implementation complexity. SHARP-Distill is **intentionally designed with modularity and deployment simplicity** as core principles:

A. **Clean separation between training and inference**: Only the lightweight student model (CompactGCN) is deployed at inference time, eliminating any runtime dependency on complex teacher models.

B. **Modular component architecture**: Each component—HGNN, DeBERTa, and CompactGCN—connects through clearly defined embedding interfaces, allowing independent optimization.

**C. Simplified Implementation Workflow**

Our implementation follows a streamlined three-stage process:
1. **Teacher Training**: We independently train the HGNN model and the pretrained DeBERTa model, which serve as the teacher components.
2. **Knowledge Embedding Generation**: We extract knowledge embeddings from both trained teacher models.
3. **Knowledge Distillation**: These embeddings are distilled into the CompactGCN student model using our unified loss function.

[1] Zhang, S., Liu, Y., Sun, Y., & Shah, N. (2022). Graph-less neural networks: Teaching old MLPs new tricks via distillation. ICLR 2022.

### Weakness 4. Lacks formal proofs or detailed justification:

You can find our full response to this comment at the following link:

🔗 **[Full Response – Lacks formal proofs or detailed justification (Reviewer scTP)](https://github.com/1234554321-00/SHARP-Distill/blob/main/README.md)**

Theoretical claims have been added to strengthen our framework. Please refer to the following link for details:

🔗 **[Theoretical Extensions and Supporting Lemmas](https://github.com/1234554321-00/SHARP-Distill/blob/main/README.md)**
Improving Parallel Program Performance with LLM Optimizers via Agent-System Interfaces
Accept (poster)
Summary: The paper proposes a system for automatically generating and optimizing parallel program mappers. Particularly, it tries to do this via using a generative optimization approach aided by an "agent-system interface" which uses a DSL to allow LLMs to write code at a high-level. Empirical results show that it can sometimes even beat expert-written mappers. Claims And Evidence: The authors claim that Agent-System Interface (an abstraction layer between the agent and the system) simplifies code generation and provides more meaningful feedback to the agent. This includes the DSL design and AutoGuide mechanisms. Both of these are supported by evidence from Sections 5.2 and 5.3. Performance results (against OpenTuner and Human Baselines) show strong performance of the system. Methods And Evaluation Criteria: The proposed methodology is well-motivated and explained clearly. The ASI design with customized DSL and explain-suggest based feedback is novel within the domain context and likely helpful for future work. For evaluation, the paper uses 9 HPC benchmarks and measures speedup achieved by the proposed approach. Theoretical Claims: none Experimental Designs Or Analyses: The paper uses various relevant baselines in the experiments. The ablations are also well directed to properly evaluate individual contributions of the ASI block, providing evidence for improvements from both DSL design and AutoGuide. Supplementary Material: no Relation To Broader Scientific Literature: The paper builds upon prior HPC / autotuning literature and promotes generative optimization as an alternative to prior reinforcement learning approaches. Essential References Not Discussed: To the best of my knowledge, relevant works are discussed appropriately. Other Strengths And Weaknesses: Strengths: - Comprehensive experiments on standard HPC benchmarks, with clear baselines and ablations. 
- Agent-system design with proper ablations Weaknesses: - Since only 9 benchmarks are considered, it is not clear how to estimate the generality of the approach - It is possible that the DSL and relevant prompts bake in a lot of human priors, which can lead to an inflated estimate of the underlying model's capabilities. The authors should also attempt to study sensitivity to the design of this ASI Other Comments Or Suggestions: The authors can motivate the choice of benchmarks better, particularly for general readers unfamiliar with domain knowledge of these tasks. Questions For Authors: How much domain expertise (about the benchmark domain and the LLM capabilities) is needed to expand the ASI and the prompt for the benchmark? More practically, do we need experts who write the mappers to design a DSL encoding appropriate priors? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thoughtful and constructive review. **Q1: How much domain expertise (about the benchmark domain and the LLM capabilities) is needed to expand the ASI and the prompt for the benchmark? More practically, do we need experts who write the mappers to design DSL encoding appropriate priors?** Our framework is designed to minimize the need for domain or system-level expertise when adding new benchmarks. Specifically, for all benchmarks in our evaluation, the agent begins from a randomly initialized mapper and improves it through generative optimization, without any handcrafted priors or expert-written strategies. To incorporate a new application into the benchmark, only two inputs are needed: 1) **Application metadata**, including task names and the list of data arguments each task accesses 2) **Hardware specification**, including the number of CPUs and GPUs per node and the total number of nodes. These inputs are typically available from the application code and the machine, requiring no understanding of the DSL or the runtime system. As a result, application developers do not need to write or understand mappers, provide mapping hints, or possess any knowledge of the DSL. That said, our system can also **support optional injection of expert knowledge** via the `AutoGuide` module. This module can provide customized interpretations of execution failures or application-specific heuristics (e.g., mapping large tasks to GPUs if the developer already knows which tasks are large). While our experiments do not use any such guidance, we believe this optional flexibility is valuable in practice, especially for developers who already have insight into performance bottlenecks and want to speed up the optimization process. **Q2: The authors can motivate the choice of benchmarks better, particularly for general readers unfamiliar with the domain knowledge of these tasks.** Thanks for the suggestion! 
We will improve the paper to provide more context and motivation for our benchmark selection. Among the 9 benchmarks, 6 are well-known matrix multiplication algorithms (Cannon, SUMMA, PUMMA, Johnson’s, Solomonik’s, and COSMA). Parallel matrix multiplication remains an active research topic due to its central role in high-performance computing and scientific simulations [1]. Furthermore, improving matrix multiplication performance has a broad impact, as it accelerates numerous downstream machine learning workloads [2,3]. The remaining 3 applications (Circuit, Stencil, and Pennant) represent diverse scientific computing workloads beyond matrix multiplication. Together, this benchmark suite offers both depth (through representative matrix-multiplication algorithms) and breadth (through diverse HPC workloads). We will expand the background and motivation accordingly in the revision. **References:** [1] Yadav et al. “DISTAL: The Distributed Tensor Algebra Compiler”. 2022 [2] Jangda et al. “Breaking the Computation and Communication Abstraction Barrier in Distributed Machine Learning Workloads”. 2022 [3] Zheng et al. “TileLink: Generating Efficient Compute-Communication Overlapping Kernels using Tile-Centric Primitives”. 2025
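To make the two required inputs above concrete, here is a minimal sketch of what such a specification could contain (the field names and the `validate_spec` helper are invented for illustration; this is not the system's actual input format):

```python
# Hypothetical sketch of the two benchmark inputs described above.
# Field names are invented for illustration, not the actual format.

app_spec = {
    # task name -> data regions (arguments) the task accesses
    "tasks": {
        "init_matrix": ["A"],
        "matmul": ["A", "B", "C"],
    },
}

machine_spec = {
    "cpus_per_node": 8,
    "gpus_per_node": 4,
    "num_nodes": 1,
}

def validate_spec(app, machine):
    """Basic sanity checks on the benchmark inputs."""
    assert app["tasks"], "at least one task is required"
    assert all(isinstance(r, list) for r in app["tasks"].values())
    assert machine["num_nodes"] >= 1
    return True
```

Nothing in such a specification requires writing a mapper or knowing the DSL; per the rebuttal, these two inputs define the structured search space the agent explores.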
Summary: This paper proposes a system powered by large language models (LLMs) to automate both the generation and optimization of mapper code. Specifically, it introduces a Domain-Specific Language (DSL) that provides a high-level interface encapsulating all performance-critical decisions required for mapper generation. The authors then implement the AutoGuide mechanism, which interprets raw execution outputs into informative and actionable feedback. This mechanism enables the agent to iteratively optimize the mapper by leveraging enriched feedback to refine its code generation strategy. Finally, the authors apply generative optimization to further enhance the generated code. Evaluation results demonstrate that the proposed method outperforms OpenTuner even after 1,000 iterations, achieving a 3.8× performance improvement. Claims And Evidence: 1. This paper argues that the proposed DSL simplifies mapper code generation and provides valuable guidance to the agent. The results demonstrate that the DSL requires less code and enhances the optimization process. 2. The paper claims that the agentic framework achieves up to a 1.34× speedup across nine benchmarks, outperforming expert-written mappers while reducing tuning time from days to minutes. As shown in Fig. 4, the performance gains are indeed significant. Methods And Evaluation Criteria: 1. One key contribution of this work is the proposed DSL. However, the authors do not describe its grammar; instead, they illustrate it with a single example. Furthermore, it remains unclear whether the proposed DSL is expressive enough to cover all mapper code generation problems. Another unclear part is that the authors mention they use server specifications and application information as inputs but did not provide more details. 2. Regarding the evaluation (Fig. 4), why does the paper report only the best results of the proposed method rather than presenting the full optimization curve? 
Theoretical Claims: NA Experimental Designs Or Analyses: In the ablation study, the authors present Code Generation Success Rates. However, in the overall evaluation, only the performance results are reported. Additionally, I am curious about the Random Mapper: is it possible that all these random mappers are correct? Supplementary Material: I read the appendix Relation To Broader Scientific Literature: It is important for improving system performance. Essential References Not Discussed: NA Other Strengths And Weaknesses: 1. This paper is well-written and easy to follow. 2. The studied problem is interesting and important. Other Comments Or Suggestions: NA Questions For Authors: 1. What is your DSL grammar? 2. Why only report the best point for the proposed approach? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thoughtful and constructive feedback. **Q1: What is your DSL grammar?** In the revision, we will include a complete description of the DSL syntax in the Appendix, covering its constructs for task placement, memory allocation, layout specification, and index mapping, as shown below.

```
Terminals: TaskName, RegionName, var, int

Grammar Rules:
Program      → Statement+
Statement    → TaskMap | DataMap | DataLayout | FuncDef | IndexTaskMap
IndexTaskMap → IndexTaskMap TaskName var
TaskMap      → Task TaskName Proc+
DataMap      → Region TaskName RegionName Proc Memory+
Proc         → CPU | GPU | OMP
Memory       → SYSMEM | FBMEM | ZCMEM | SOCKMEM
DataLayout   → Layout TaskName RegionName Proc Constraint+
Constraint   → SOA | AOS | C_order | F_order | Align == int
FuncDef      → def var(var+): FuncStmt+
FuncStmt     → var = Expr | return Expr
Expr         → var | var(Expr+) | Machine(Proc) | Expr.Expr | Expr Op Expr
             | (Expr) | Expr[Expr] | *Expr | Expr ? Expr : Expr
```

**Q2: Why only report the best point for the proposed approach rather than the full optimization curve?** Please kindly note that we report both the best and average optimization trajectory over 10 iterations across 5 runs in Figure 4. Also, we want to clarify that reporting the best-performing mapper is appropriate in our context. Mapper optimization is an offline process, and in practice, it is standard to run the optimizer multiple times and deploy the best result. Once identified, the mapper can be reused across repeated executions on the same application, input, and hardware, incurring no further search cost. **Q3: Is the DSL expressive enough to cover all mapper code generation problems?** Our DSL is designed to express a wide range of high-performance mapping strategies, including all of the most important decisions. While there may be cases where certain optimizations are not directly expressible, we have not encountered any. 
Despite being more constrained than general-purpose C++, the DSL has been proven to be effective: all mappers discovered by our agent that outperform expert-written C++ implementations are expressible within the current DSL. **Q4: Server specifications and application information are mentioned as inputs, but details are missing.** We will clarify these details in the revised version. The server specification includes the number of GPUs and CPUs, and whether the OpenMP runtime is enabled. The application specification includes the list of tasks defined in the application and the data regions accessed by each task. These inputs define the structured search space explored by the agent during optimization. We will also include an example of such input specifications in the revised version for completeness. **Q5: The authors present Code Generation Success Rates (Section 5.2). However, in the main evaluation (Section 5.1), only the performance results are reported.** In the main evaluation (Section 5.1), we focus on measuring end-to-end performance, which includes both the correctness and performance of generated mappers. If the generated code has any syntax or runtime issues, its throughput is recorded as 0. Therefore, the performance numbers in Section 5.1 implicitly reflect code generation success, i.e., incorrect mappers yield zero performance. Section 5.2 isolates the code generation aspect to better analyze the effects of the DSL on LLM generation success. This section does not aim to evaluate performance directly but rather investigates how often the LLM produces syntactically and semantically correct mappers given natural language descriptions. It complements the main results by demonstrating that using the DSL significantly improves generation success compared to C++, which underpins the performance improvements seen in Section 5.1. **Q6: Are all random mappers correct?** No, not all random mappers are correct. 
For each application, we generate 10 random mappers by sampling from the full DSL-defined search space, totaling 90 mappers across 9 applications. Among them, 74 (82.2%) raise runtime errors due to invalid mapping decisions. The runtime system enforces correctness by rejecting such mappers during execution, resulting in a throughput of zero.
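For concreteness, a tiny mapper conforming to the grammar in Q1 might look like the following (a hypothetical illustration; the task name `matmul` and region name `A` are invented, not taken from the benchmarks):

```
Task matmul GPU
Region matmul A GPU FBMEM
Layout matmul A GPU SOA C_order
```

Each line instantiates one production of the grammar (TaskMap, DataMap, and DataLayout, respectively): place the task on a GPU, allocate its region in GPU framebuffer memory, and lay that region out struct-of-arrays in C order.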
Summary: The paper introduces an innovative framework aimed at automating the process of optimizing parallel program performance using large language models. The proposed system employs a Domain-Specific Language to simplify the generation of mapping code and uses a mechanism called AutoGuide to turn raw execution feedback into actionable insights for the optimization agent. This system leverages generative optimization to find high-performance mappers efficiently, achieving superior results compared to existing tools like OpenTuner, even after fewer iterations. Claims And Evidence: 1. The experiments are conducted on a single-node setup with specific hardware. Whether the approach can achieve similar results on larger, more heterogeneous systems with diverse hardware remains questionable. 2. Average performance across multiple runs is reported, but other percentiles of the performance distribution are missing. This makes it hard to judge the consistency and reliability of the speedup. Additionally, how is the tail performance being impacted? Methods And Evaluation Criteria: Yes Theoretical Claims: This paper focuses on experimental validation and does not include formal proofs for its theoretical claims. Experimental Designs Or Analyses: The authors evaluate their approach using nine established benchmarks from the Legion framework, which is appropriate for assessing parallel program performance. A few potential issues: 1. Experiments on a single-node hardware design 2. Performance measured only on average is not sufficient; tail performance should also be reported Supplementary Material: All parts of the supplementary material are reviewed Relation To Broader Scientific Literature: The paper advances parallel programming by synthesizing high-level DSL design, reinforcement learning autotuning, and LLM-driven code generation, incorporating natural language feedback and a modular, agent-based approach. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: ### Strengths 1. The system handles complex parallel computation systems and scales efficiently with larger programs and datasets. 2. The DSL abstracts away the complexities of low-level programming, making it easier for LLMs to generate correct mapping code and improving code generation success rates. ### Weaknesses 1. The current system works best for parallel programs that align with the DSL’s design. For non-standard or highly unique system architectures, the framework may need further customization. 2. While the AutoGuide mechanism is powerful, it still relies on raw execution output which might not always provide the necessary insights, especially in cases of complex or obscure errors. 3. Performance metrics could be extended. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review! We address the concerns below: **Q1: Performance metrics could be extended** We appreciate your suggestion and welcome the opportunity to clarify our evaluation methodology and expand the reported statistics. In our setting, reporting the best result across multiple runs is appropriate, as the best mapper is the one that is desired by the user. Mapper search is an offline optimization process, and it is feasible to run the optimizer multiple times to select the highest-performing mapper. Once identified, this mapper can be reused without incurring additional search cost, as the deployment scenario (application, input, and hardware) remains fixed. That said, we agree that additional statistics provide a more complete picture of performance variations. In the revised version, we will include the mean, standard deviation, worst, median, and best normalized throughput across five runs for each benchmark. The extended results are as follows: ### Our Framework (Normalized Throughput) | Benchmark | Mean | Std Dev | Worst | Median | Best | |------------|--------|---------|--------|--------|------| | Circuit | 1.33× | 0.01 | 1.31× | 1.33× | 1.34× | | Stencil | 1.01× | 0.01 | 1.00× | 1.01× | 1.02× | | Pennant | 1.03× | 0.02 | 1.00× | 1.03× | 1.04× | | Cannon | 1.09× | 0.00 | 1.08× | 1.09× | 1.09× | | SUMMA | 0.86× | 0.48 | 0.00× | 1.07× | 1.09× | | PUMMA | 0.57× | 0.55 | 0.00× | 0.66× | 1.09× | | Johnson | 0.98× | 0.17 | 0.68× | 1.06× | 1.07× | | Solomonik | 0.52× | 0.41 | 0.00× | 0.61× | 1.09× | | COSMA | 1.25× | 0.03 | 1.23× | 1.23× | 1.31× | We additionally report OpenTuner results. 
### OpenTuner (Normalized Throughput) | Benchmark | Mean | Std Dev | Worst | Median | Best | |------------|--------|---------|--------|--------|------| | Circuit | 0.97× | 0.16 | 0.81× | 0.99× | 1.20× | | Stencil | 0.00× | 0.00 | 0.00× | 0.00× | 0.00× | | Pennant | 0.00× | 0.00 | 0.00× | 0.00× | 0.00× | | Cannon | 0.00× | 0.00 | 0.00× | 0.00× | 0.00× | | SUMMA | 0.00× | 0.00 | 0.00× | 0.00× | 0.00× | | PUMMA | 0.00× | 0.00 | 0.00× | 0.00× | 0.00× | | Johnson | 0.00× | 0.00 | 0.00× | 0.00× | 0.00× | | Solomonik | 0.00× | 0.00 | 0.00× | 0.00× | 0.00× | | COSMA | 0.00× | 0.00 | 0.00× | 0.00× | 0.00× | Our method achieves relatively stable performance across most benchmarks. The higher variance and occasional 0.00× worst-case throughput observed in SUMMA, PUMMA, and Solomonik are due to invalid mapper configurations in the search space (e.g., violating cuBLAS layout constraints). The runtime enforces correctness by rejecting such configurations during execution. While the generative optimizer typically learns to avoid these cases through the AutoGuide mechanism, occasional failures within the 10-iteration budget are still possible. In practice, such failures can be mitigated by repeating the optimization and selecting the best-performing mapper. In contrast, OpenTuner, despite running the same number of iterations, fails to generate valid mappers for 8 out of 9 benchmarks. This highlights the difficulty of exploring the search space using traditional reinforcement learning methods. **Q2: Experiments on multi-node systems** Thank you for raising this point. Our system supports multi-node execution, as this capability is already provided by the underlying runtime system. There is no fundamental technical limitation in our approach that prevents generalization to multi-node systems. The only technical issue we encountered in scaling up our experiments is an engineering issue, not a limitation of our method. 
Specifically, the cluster we currently use permits interactive execution on a single node, but multi-node runs require submitting jobs through the SLURM scheduler and waiting for resource allocation and job execution. While this can be addressed through SLURM-specific customization, it is orthogonal to the core contributions and novelty of our system. For users with direct multi-node access (e.g., without queueing), our system runs seamlessly without modification. We plan to extend our evaluation to larger-scale experiments as part of future work.
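The best-of-N reporting protocol described in this rebuttal can be sketched in a few lines (a minimal illustration; the throughput values below are invented, loosely modeled on the SUMMA row of the table above):

```python
import statistics

def summarize_runs(throughputs):
    """Summarize normalized throughput across repeated optimizer runs.

    Runs that produced invalid mappers (rejected by the runtime) are
    recorded as 0.0, matching the evaluation protocol above.
    """
    return {
        "mean": statistics.mean(throughputs),
        "std": statistics.stdev(throughputs),
        "worst": min(throughputs),
        "median": statistics.median(throughputs),
        "best": max(throughputs),  # the mapper that would be deployed
    }

# Invented example: 5 runs, one of which produced an invalid mapper.
runs = [1.07, 1.09, 0.00, 1.06, 1.08]
stats = summarize_runs(runs)
```

Because mapper search is offline, only `stats["best"]` matters at deployment time; the remaining statistics quantify run-to-run variability, as in the extended tables above.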
Algorithm Development in Neural Networks: Insights from the Streaming Parity Task
Accept (oral)
Summary: This is an interesting work that studies the development of algorithms in RNNs. It combines theory and experiments, which I think is a great plus. The theory and experiments are novel, showing how representations become merged in a linearized dynamical theory. Claims And Evidence: The claims are validated, and I find the experiments sufficient. Methods And Evaluation Criteria: Good Theoretical Claims: Nothing too problematic, even though the theory may be too simple -- it is a simple quadratic loss function! Experimental Designs Or Analyses: I am a little confused by the result in Figure 4. a) At initialization, why are there so few states? Here, all the weights are random and so all the states should be different, right? I would expect a lot of states here. b) How do you count the number of states? Does it require both representations to be **exactly** identical? Or is some approximate identity sufficient? Supplementary Material: I took a quick look Relation To Broader Scientific Literature: NA Essential References Not Discussed: I think the references are fine -- maybe the work should acknowledge and discuss its limitations more. For example, noise and/or regularization are known to be a key factor in representation learning, but the proposed theory ignores both factors: https://arxiv.org/abs/2410.03006 Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: See my answers above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the feedback and spending your time reviewing our paper. Please find below responses to your comments and the changes we will make to the paper. ***Experimental Designs Or Analyses*** - The RNN was initialized at small weights, which is why we see few states at initialization in Figure 4. Due to the small weights, the hidden states are initially all close. We will make this clearer in the figure's description. When we rerun the experiment at a higher initialization scale (but still in the generalizing regime), we see more states at initialization. These quickly disappear during training, and the rest of the behavior is the same in this case as seen in Figure 4. - We used approximate identity, since due to numerics two representations never end up overlapping exactly. We merge two states when their representational distance is below a threshold. The threshold was set at 0.01 of the representational standard deviation during the entire training procedure. Varying the threshold around this scale did not significantly affect results. ***Essential References Not Discussed*** We agree the paper should make it more clear that noise and regularization can still play important part in representation learning for many settings. We will add this to the paper and cite relevant literature.
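The approximate-identity criterion in the reply above can be sketched as a simple threshold merge (a minimal, hypothetical illustration assuming Euclidean distance in hidden-state space; the toy state values are invented):

```python
import math

def count_states(hidden_states, rel_threshold=0.01):
    """Count distinct hidden states, merging any pair whose distance is
    below rel_threshold times the overall std of the representations."""
    flat = [x for h in hidden_states for x in h]
    mean = sum(flat) / len(flat)
    std = math.sqrt(sum((x - mean) ** 2 for x in flat) / len(flat))
    scale = rel_threshold * std
    reps = []  # one representative per merged state
    for h in hidden_states:
        if all(math.dist(h, r) > scale for r in reps):
            reps.append(h)
    return len(reps)

# Toy example: two near-duplicate pairs plus one isolated state.
states = [(0.0, 0.0), (1e-4, 0.0), (1.0, 1.0), (1.0001, 1.0), (2.0, 0.0)]
```

With the 0.01 threshold the near-duplicate pairs merge, leaving three states; shrinking the threshold toward zero recovers all five, which is why the authors note that varying the threshold around this scale did not significantly affect results.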
Summary: The authors provide an in-depth analysis of an RNN solving the Streaming Parity Task. Specifically, they extract a computational graph from the network at different training phases. Once this graph becomes cyclic, the network can generalize to longer times. An analytical treatment of how learning dynamics affects nearby representations explains some of the observed properties. ## After rebuttal After reading all the rebuttals and discussion with all reviewers, I am keeping my score. Claims And Evidence: Overall yes. There is one point which seems at odds with the results. In line 155, the authors write “As the model trains, the automaton expands into a complete tree that correctly fits the training data, reducing the loss only on the training data. After that, states in the automaton appear to merge until it becomes finite, at which point it generalizes on all data.” From reading this, one would expect the training loss to be close to zero – while the states keep on merging and then there is a large drop in generalization without a big change in training loss. In contrast, Figure 6 shows that the training loss hardly changes until dropping almost together with the generalization loss. Methods And Evaluation Criteria: Yes. The tasks chosen are all described by automatons, allowing one to check whether state mergers can lead to finite automatons and thereby generalization. Theoretical Claims: I read all the proofs. Did not check every math step in detail. The methods are very similar to van Rossem & Saxe 2024. Experimental Designs Or Analyses: Yes. The task is well suited to the problem at hand, and the training and automaton extraction seem valid. Supplementary Material: I read all the supplemental material. Relation To Broader Scientific Literature: The mathematical part is very similar to van Rossem & Saxe 2024. If I understand correctly, the main difference is defining h_1 and h_2 as individual functions, instead of optimizing Dh. 
This choice should be explicitly stated and motivated in the text. The application of the theory to algorithm development is novel to the best of my knowledge. Essential References Not Discussed: Two papers that come to mind in the context of automaton extraction are: Turner, Elia, Kabir V Dabholkar, and Omri Barak. “Charting and Navigating the Space of Solutions for Recurrent Neural Networks.” In Advances in Neural Information Processing Systems, 34:25320–33. Curran Associates, Inc., 2021. Brennan, Connor, Adeeti Aggarwal, Rui Pei, David Sussillo, and Alex Proekt. “One Dimensional Approximations of Neuronal Dynamics Reveal Computational Strategy.” PLOS Computational Biology 19, no. 1 (January 6, 2023): e1010784. https://doi.org/10.1371/journal.pcbi.1010784. Other Strengths And Weaknesses: Finite automaton (line 160) – because the network is finite, there is a finite number of states by definition. To show that the automaton is infinite before the transition, one would need to see how the number of states scales with the size of the network and the duration of the training sequences. What determines the timescales (tau_h, tau_y)? Definition of w is a bit confusing because it seems like dy should equal y2-y1. Perhaps it’s worth emphasizing that dy is the network’s output and y2-y1 is the difference in labels. Line 1175 wether – whether Figure 14 – The match between theory and experiment is not very convincing. Relation to neural collapse, Farrell. NTK and others. Tunnel Line 194 – “this assumption is true for the training data”. What happens when x1,x2 are of different length? Because the training set is length 10, there isn’t a match between any residual sequences. The definition of subsequence vs. initial sequence is not entirely clear. Equation 49 – what is the intuition / justification for this? There is no direct comparison of dynamics (dh,w…dy) to data. (Similar to figure 4 in van Rossem & Saxe). 
Figure 4: loss type and secondary y-axis missing. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the feedback and spending your time reviewing our paper. Please find below responses to your comments and the changes we will make to the paper. ***Claims And Evidence*** The phrasing of the sentence in line 155 is not clear enough, so we will amend it to avoid confusion. What is meant by this is that initially only the training loss decreases, even though the generalization loss remains fixed. We can see in Figure 6 that the generalization loss is completely fixed until around 550 epochs, whereas training loss already can be seen in the plot to be decreasing at around 250 epochs, albeit very slowly initially. By the time the training loss gets close to zero the generalization loss starts to drop, although this happens almost immediately, with no delay in between. ***Relation To Broader Scientific Literature*** Correct, the only difference between the interaction model discussed here in 3.2 and that in van Rossem & Saxe 2024 is replacing the linearized representation map with two optimizable vectors. This difference is due to the consideration of an RNN as opposed to a feedforward network. In a feedforward network we can smoothly vary the inputs for the map that assigns representations, and can thus take a local approximation. For an RNN we cannot do this as easily as the hidden map will depend on multiple input symbols. But since these are discrete we can instead consider representations for different input sequences as each separately optimizable vectors, as different sequences cannot get arbitrarily close in the input space. We will mention this contrast explicitly in the text. ***Other Strengths And Weaknesses*** Note that the network used here is quite overparameterized compared to the task. The RNN has 100 hidden units, which is very high dimensional considering a representation solving the task can in theory be constructed in one dimension. 
We did not see any issues related to network size limitations in the setting we studied. We found that we got consistent results for finiteness of the automata, using a large enough test set, with long enough sequences. The timescales $\tau_h$, $\tau_y$ depend on the architectural details of the map assigning representations and the map assigning outputs to those representations respectively, which have been abstracted away in the theoretical model. They cannot be determined from theory, only empirically by studying trajectories. We will change the notation to more clearly distinguish between predicted outputs and target outputs. In Figure 14 we see that for both theory and experiment the drop in number of states in the second phase is solely due to agreeing pairs. This is the main point of the figure. Besides that there is indeed not much of a match, and the figure does not explain much else. We will state this more clearly in the paper. The assumption in line 194 indeed does not hold in the training data for sequence pairs with different lengths and where only one subsequence will result in a sequence longer than 10. It still holds for most pairs of two input sequences and subsequence, but we should not say it holds for all. The better motivation for the assumption is that when there isn't a matching sequence, that pair will not contribute to the interaction. We will change this in the text. The motivation for equation (49) is as follows: if we consider the map assigning a representation to a sequence of length $n$, this map consists of the recurrent map applied $n$ times. Suppose this recurrent map has $P$ parameters. The total map assigning the representation will then have $nP$ parameters, if we double count the same parameters when they show up multiple times. Applying gradient descent to this will result in a sum of $nP$ terms, thus we expect the effective learning rate to scale linearly with $n$. 
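In symbols, that intuition is the standard backpropagation-through-time sum (a sketch; here $\theta$ denotes the shared recurrent parameters and $h_t$ the hidden state after $t$ input symbols, neither of which is notation from the paper):

$$\frac{\partial \mathcal{L}}{\partial \theta} \;=\; \sum_{t=1}^{n} \frac{\partial \mathcal{L}}{\partial h_n}\,\frac{\partial h_n}{\partial h_t}\,\frac{\partial h_t}{\partial \theta},$$

a sum of $n$ terms through the shared parameters, which is why the effective learning rate for a length-$n$ sequence grows linearly with $n$.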
We will add this explanation to the paper, since the current one is too limited. There is no comparison of the two-point dynamics because the simpler model from van Rossem & Saxe 2024 was not enough to capture the behavior here, see e.g. the initial divergence in Figure 10. The fixed expansion point interaction model can capture this behavior, but some of its parameters are hard to measure, as some of the quantities in equation (41) cannot be directly computed from the representation and prediction vectors. Additionally, it has far more freedom considering it has 4 instead of 2 effective parameters. Fitting it to the data would not be very demonstrative, as we would have enough freedom to fit any relatively simple curve such as seen here in the representational distance. --- Rebuttal Comment 1.1: Comment: Regarding the match in figure 14 - is there any simple setting in which there is a quantitative match between theory and simulations? --- Reply to Comment 1.1.1: Comment: We found that for the experiment on randomly generated regular tasks, sometimes the theory and experimental curves were very close, when all the settings in the theory, such as the effective learning rates and merging threshold, were set manually to the right values. This was not a very consistent result, however: not all randomly generated tasks gave a smooth enough curve to be able to fit it well. Perhaps when considering more complex tasks, these curves will become smoother and easier to fit, but we have not explored this any further.
Summary: This paper explores how neural nets learn automata through training, focusing on a parity task in RNNs, with a short foray into a modular arithmetic task in transformers at the end. They are able to (in a very satisfying way!) theoretically derive equations governing the merging of states in the RNN under some assumptions, which provides a number of useful intuitions for when and why such merging occurs, which explains their experimental results. This analysis shows that the learning occurs in phases, with an initial phase where many states effectively "memorize" distinct sequences of training data. This is followed by a second phase where states which constrain all future outputs similarly merge, corresponding to the network internally instantiating a finite-state machine (though non-minimal) that generalizes to infinite-length sequences. I feel compelled to also say that aside from the actual content - which I found interesting and thought provoking - this paper is extremely well-written and was a pleasure to read! Claims And Evidence: The main claims are that - State merging is what underlies generalization, which they show by interpreting/analyzing the internal states of the RNN and their transitions upon seeing another token of input as an automaton. Through training they observe that particular distinct sequences of inputs increasingly cause internal RNN states that are close to each other (i.e., the states merge), and ultimately the number of visited internal states becomes finite, corresponding to infinite-lengthscale generalization. This is a convincing example of their claim, though it is of course a single relatively simple example. To be more specific about this limitation, the tasks explored in this work were all regular, and it's not obvious to me what would happen in the case of e.g. parenthesis balancing or Dyck languages or other non-regular things. Similarly, what occurs in the more general case of probabilistic automata? 
Still, it's very illustrative and coupled with the local interaction theory is convincing. - A local interaction theory explains why and when states merge over training. The setup of the theory explores how the distance between two hidden states associated with two distinct sequences of input changes over gradient descent. The theory shows a number of deep intuitions - that states merge when they are associated with the same constraints over all future outputs, that states associated with short inputs won't merge, and that there's a dependence on initialization - all of which is borne out in experiment. They perform analysis on their experimental RNN on these points (figs. 6, 7, and 9). Though there are some assumptions made to make the theory, they discuss them and say them explicitly, so that all seems kosher. Methods And Evaluation Criteria: Yes, I find the methods good, and appreciate that they tested on random automata in the appendix. Regarding the transformer experiments towards the end of the paper - I am unsure why they didn't study the modular addition task which has been shown to have the type of sudden phase-transitiony/grokking behavior that their parity RNN has. Instead they studied a modular subtraction task in a network which showed a smoother transition to generalization. I wonder if the results would bear out in a cleaner way in the setting of more sudden grokking as their main example had in their RNN. In general I believe there is a lot more work that could have been done on the transformer - but that is reasonably beyond the scope of the current work (and I look forward to seeing it at some later date). Theoretical Claims: Yes, I found the theoretical work elegant and clearly explained. Experimental Designs Or Analyses: I thought their main RNN experiments, and the experiments on random automata, were designed and carried out well. Supplementary Material: Supplemental looked good.
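The state-merging picture described above can be made concrete with a minimal sketch (our illustration, not the paper's code): build a prefix-tree automaton over training sequences for the streaming-parity task, then merge states that constrain all future outputs identically. For parity, two prefixes impose the same constraints exactly when they have the same parity of ones, so the tree collapses to a 2-state DFA.

```python
# Toy illustration of merging prefix-tree states into a DFA for streaming parity.
from itertools import product

def prefix_tree_states(seqs):
    """All prefixes (including the empty one) seen in the training data."""
    states = {()}
    for s in seqs:
        for i in range(1, len(s) + 1):
            states.add(tuple(s[:i]))
    return states

def parity(prefix):
    return sum(prefix) % 2

def merge_states(states):
    """Group prefixes whose future input->output maps agree (here: same parity)."""
    classes = {}
    for p in states:
        classes.setdefault(parity(p), set()).add(p)
    return classes

train = [list(s) for s in product([0, 1], repeat=4)]  # all length-4 sequences
states = prefix_tree_states(train)
merged = merge_states(states)
print(len(states), len(merged))  # 31 prefix-tree states collapse to a 2-state DFA
```

The tree phase of training corresponds to the 31 distinct prefix states; the merging phase corresponds to collapsing them into the 2 equivalence classes, after which the automaton is finite and generalizes to arbitrary lengths.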
Relation To Broader Scientific Literature: This is a strong contribution to the field of interpretability in neural networks. Arguably, the entire issue of interpretability can be framed as figuring out the fundamental link between the continuous (up to floating point precision) dynamics of neural networks and the algorithmic and symbolic nature of what we often think of as performing computation (a similar idea applies to neuroscience). This paper does a good job of pretty directly attacking this fundamental issue. There is a long history on this topic, going back at least to Essential References Not Discussed: I thought the references were reasonably complete, and I appreciated the connection to the neuroscience literature - but just to name some other refs that came to mind the authors might be interested in (I don't necessarily think these need to be included): - Cleeremans, Axel, David Servan-Schreiber, and James L. McClelland. "Finite state automata and simple recurrent networks." Neural computation 1.3 (1989): 372-381. Classic work on a similar topic. McClelland has other similar work that more directly looks at fixed point structures in RNNs and interprets them as states of an automata but I can't find that work right now. - Shai, Adam, et al. "Transformers represent belief state geometry in their residual stream." Advances in Neural Information Processing Systems 37 (2024): 75012-75034. In that paper they show that the internal states of transformers represent the states of the probabilistic automata (they call it a mixed state presentation) that describes the prediction algorithm over some stochastic process. Similar to the intuition in the paper being reviewed, these states are also those that merge sequences of inputs that constrain the future in the same way. Other Strengths And Weaknesses: I've answered this in responses to other questions.
Other Comments Or Suggestions: I think the plots in figure 9 would look cleaner if you showed fewer x and y marker labels - e.g. you could show 0.2, 0.4, 0.6, 0.8, 1 instead of every 0.1 interval, or even a 0.25 interval. Questions For Authors: How do you think the analysis, theory, and intuitions of this work apply to non-regular and probabilistic algorithmic tasks? Would the results of your transformer analysis look different if you analyzed a setting where obvious and sharp phase-transition grokking phenomenon occurs, like in the original Nanda et al. work studying modular arithmetic? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for the feedback and spending your time reviewing our paper. Please find below responses to your comments and the changes we will make to the paper. ***Questions For Authors*** - The intuitions and theoretical model surrounding state mergers are independent of the task and not affected by averages, so we would expect to see these in general. For a probabilistic regular task we expect everything to still work via a very similar argument as in the deterministic case. However, for a non-regular task a different mathematical object is needed to represent the learned internal structure of the network, and how representational mergers can result in the development of one is unclear. E.g. it is not possible for representational mergers alone to explain parenthesis balancing, as after all the possible mergers have occurred the automaton is still not finite and so there is no reason to expect generalization to longer sequences. We do wonder if perhaps other types of merging occur due to similar continuity arguments, perhaps in the patterns of maps within architectures more advanced than the RNN explored here. This is why we found the merging of attention patterns in transformers interesting, as it may be related to the development of some kind of mathematical object more advanced than a DFA, but this would require significant further study. - Note that Nanda et al. used weight decay, which we avoided to focus on induced bias in gradient descent. When we run the experiment on modular addition data without weight decay the transition is not as sharp as in Nanda et al., although it is still sharper than the subtraction task. We find qualitatively similar results as for subtraction, where the number of hidden states ends up increasing overall, and the number of attention patterns first increases and then decreases. We will add this plot to the paper.
When using weight decay we get a sharper transition and find significant merging in both the attention patterns and hidden states, although studying regularization is beyond the scope of this work. --- Rebuttal Comment 1.1: Comment: Thank you for this response. I maintain my score of Strong accept.
Summary: This paper studies the learning dynamics in RNNs trained on a toy task to understand what conditions influence generalization in RNNs. The authors first study RNNs trained on the streaming parity task, and group together RNN hidden states to construct Discrete Finite Automaton (DFA) proxy-models of the RNN. They study how the number of states in the constructed DFAs evolves during training and find that training starts with few states, then over-fits each data-point with a tree of states. Finally, states merge until the DFA truly becomes finite, at which point generalization occurs. The authors attempt to explain the decrease, leading to generalization, by suggesting that for continuous models, continuity will lead to nearby states merging, as it results in the same solution being found. The authors attempt to explain the increase of states, earlier in training, by suggesting that differing effective learning rates lead to states drifting apart. These explanations are justified by analyzing two simplified mathematical models for sequence learning via gradient descent. By way of experiments and simplified model analysis, the authors connect the generalization phenomenon with parameter initialization strength and dataset size. ## Update After Rebuttal Thank you to the authors for the many clarifications! I am mostly satisfied by their responses and have thus increased my score to a 4. I have chosen not to give the paper a 5 on account of the disconnect between the studied toy model (continuous functions of sequences) and the system of interest (RNNs), and some minor clarity issues (e.g. greater intuition for equations 3, 7). Claims And Evidence: The claims seem reasonable, except the reviewer would like to see greater discussion of the assertion that continuity is the key property that enables mergers.
The reviewer wonders if it might be some other regularity or smoothness property that is crucial (see Question 2 of “Questions for Authors” section). Methods And Evaluation Criteria: Evaluation seems sound, but the reviewer did not review the full supplementary section. Theoretical Claims: I am concerned about the assumption made in Equation 16 and a potential error in Equation 17 (see bullet points 2 and 3 in “Other Strengths and Weaknesses” section). Experimental Designs Or Analyses: Experimental design and analysis appear sound. Supplementary Material: Appendix A.1, A.2, most of B.1, and D. Relation To Broader Scientific Literature: The reviewer is less familiar with the Discrete Finite Automata (DFA) literature so is ill-positioned to discuss this. Conversely, the reviewer is familiar with literature on analysing learning in RNNs. To the reviewer's understanding, much of the recent work on how RNNs learn has focused on computational analysis of internal dynamics (Ostrow et al., 2024, NeurIPS), analysis of linear RNNs (Zucchett & Orvieto, NeurIPS, 2024; Li et al. JMLR, 2022), or analysis of very simple, non-optimal, learning rules (Clark & Abbott, Phys Rev X, 2024). From the reviewer’s point of view, this work is novel because it uses DFA coarse-graining to study the RNN representations algorithmically (although, again, the reviewer is less familiar with DFA literature so such analysis may have appeared there), and because the analytical results deal with a hyper-simplified model (casting aside RNN architecture altogether) and seem to provide some insights that still apply to learning in RNNs. Essential References Not Discussed: The reviewer cannot immediately think of any missed references. Other Strengths And Weaknesses: ## Strengths The paper is nicely written and the approach of coarse-graining with DFAs is very cool and provides great insight into RNN learning dynamics on the streaming parity task.
The authors did a great job with study design, to have selected a mathematical model that is so highly simplified but still appears to yield relevant insight. Lastly, the plots are visually appealing and insightful. ## Weaknesses The reviewer is primarily worried about some of the mathematical assumptions made in the paper (first two bullets below), and also about a potential mistake in Equation 17 (last bullet below): - The reviewer is concerned about the closeness of hidden states assumptions (see Q1 of “Questions for Authors” section). - In equation (16) the authors make the assumption that the difference in hidden state representations follows simple linear dynamics (specifically, that the time derivative is proportional to the state value). Given the coupling with the $D_{y_i}$ variable derived in equation (14), this appears to be a not-insignificant approximation. As such, it would be nice to see some justification for this, along with a mention of this in the main body of the manuscript. - The reviewer was unable to get from line 6 to line 7 in equation (17). It seems that the above linear assumption is being used here but, unless the reviewer is missing something, there appears to be an error where the second term in equation 7 is different from what it should be by a constant factor. On the whole the reviewer likes the paper, but cannot recommend it for acceptance until their concerns are addressed. If these and the below comments + questions are satisfactorily addressed the reviewer would likely increase their score. Other Comments Or Suggestions: - It would be nice if the authors could discuss their choice of gradient flow as learning dynamics versus something closer to stochastic gradient descent–perhaps a discussion of pros and cons of this choice. - Many of the figures, in the main body and the supplementary section, are missing a Y-axis label. Please add! 
- Line 138 LHS: “them same” => “them the same” - Line 119 RHS: “As an example for illustration, suppose that two sequences in the dataset agree on target outputs, and one already has the correct predicted output.” The reviewer found this a little confusing. How is it that they can agree on targets but not predicted output? Is the target not what is being predicted? Perhaps this is a problem with the reviewer’s comprehension but some clarity here could be useful. - It would be really helpful to have some intuitive description, if possible, of the dynamics in Equation 3. - Some insight into the derivation of Equation 7 would be great. - Line 305 LHS: “that the first pairs to merge are the ones for which $n - m$ is minimal.” Isn’t Equation 8 a statement about the system at a fixed point? If this is true, why would it explain which pairs merge earlier or later? - Line 350 RHS: could be good to give a quick definition of what “Regular Random” means. - Line 436: suggest: “mathematical structures” => mathematical objects - Line 671: a definition of “accepting state” could be good - Line 693: reference for Hopcroft’s algo is missing - Equation 14 lines 8-9: it seems the $2$ in the denominator should be a $4$. - Notation in section B.1: it could be nice to use something to distinguish the target $y$ values from the predicted ones. - Line 814: not sure there is supposed to be a 2 in the denominator for $\frac{1}{\tau_h}$ - Could be nice to elaborate on how Equation 19 was derived - Equation 25 and 26: might suggest different notation for determinant and trace, given similarity with previously used symbols. Questions For Authors: 1. Do the authors have insight about how the learning dynamics in a more detailed model would be different from their parameterizing the output maps directly with the parameters of the Taylor expansion? 2. 
Much of the theory rests on the assumption that the RNN states observed in practice are close enough together for the Taylor expansion of Equation 2 to hold. In Fig.10 (Right), and similarly in Figs 15, 16, it seems that–before merging–the merging states actually get rather far apart from each other (looking at the blue trace). Does this not invalidate the closeness assumption? Relatedly, do all states usually get far apart like this before merging, or do some stay close together during the training dynamics? 3. The studied toy dynamical system model seems to provide an argument for the sufficiency of continuity, but perhaps not for the necessity of continuity for this phenomenon. Can the authors think of ways that they could test a discontinuous map (or at least an approximation of such), to determine the necessity of continuity? The reviewer wonders if it might be some form of regularity, rather than the continuity per se, that is the important quality here. 4. In supplementary figure 16 it doesn’t seem like the initial weight scale has much of an effect on “Number of States” or “Merging” with a Tanh, in comparison to the ReLUs used in the paper. How do the authors explain this? 5. Part of the authors’ motivation for the work is from a neuroscience perspective. However, the spike coupling between neurons is often viewed as discontinuous. Do the authors believe that the intuition provided in this paper would not apply to spiking networks? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the feedback and spending your time reviewing our paper. Please find below responses to your comments and the changes we will make to the paper. ***Weaknesses*** - We are unsure if this is related to the question about equation (16), but for clarity we would like to note that in equation (16) we mean to assume that $\frac{d}{dt}dh$ and $dh$ are proportional during training only as vectors. They still differ by a time-dependent proportionality coefficient. Complex behavior can happen within this coefficient and $dh$ does not have linear dynamics. We can see this in that some of the results, such as the transition between a rich and lazy regime in equation (4), are not present for linear dynamics. We will change the text to make this point clearer.\ We will also add a mention of this assumption in the main body. In our view, however, it does not change the bigger picture. We are only restricting to a set of solutions that we can easily solve, which are still valid solutions to the system in equation (14). Solutions that move towards or away from each other are, due to the importance of mergers in this paper, precisely the ones we would like to study. Additional solutions may exist, where representations spin around each other, but this is less relevant to the formation of finite automata. - There are a few steps between line 6 and 7 and the appendix will be changed to explain these in more detail. We use the Ansatz $\frac{\mathrm{d}}{\mathrm{d}t} dh \propto dh \implies \frac{\frac{d}{dt}dh}{||\frac{d}{dt}dh||}=\frac{dh}{||dh||}\implies \frac{d}{dt}dh=\frac{||\frac{d}{dt}dh||}{||dh||}dh$.
This allows us to rewrite the second term in line 6 from equation (17): $D_{y_i}\frac{d}{dt}dh=\frac{||\frac{d}{dt}dh||}{||dh||}D_{y_i}dh=\frac{\frac{||\frac{d}{dt}dh||}{||dh||}||dh||^2}{||dh||^2}D_{y_i}dh=\frac{dh^\top (\frac{||\frac{d}{dt}dh||}{||dh||}dh)}{||dh||^2}D_{y_i}dh=\frac{dh^\top\frac{d}{dt}dh}{||dh||^2}D_{y_i}dh$, where we used the previous relation twice. ***Questions For Authors*** 1. One can write exact equations of the pair's dynamics by replacing the effective learning rates with time-varying tangent kernels. These depend on the parameterization and thus the architecture. The rest of the equation's structure still is the same. By replacing the kernels with constants, the architecture-independent part of the dynamics could be examined on its own. In a more detailed model we would still see this part of the equation and therefore expect similar merging results, but the kernels may add highly complex behavior. 2. The representational distance in the plots 10, 15, 16 are normalized in order to compare the patterns, as distances for the merging pair are much smaller than the diverging pair. The merging pair was reduced by about a factor 100. This is not clear from the figure's description and we will change this.\ We believe the observation that additional freedom in the model can result in initial divergence patterns is interesting on its own, as it offers an explanation to the tree fitting phase. However, the paper could do better discussing the accuracy of the approximation in the experiment. Even though the representational distance is small this does not guarantee the approximation holds. We ran the experiment and tracked the values of the first and second order term in the Taylor expansion for 100 randomly selected pairs of hidden states. We found that on average for pairs that merge the first order term always dominates by at least a factor of 10. 
We will add this to the paper.\ From what we have seen all merging pairs move apart before merging with a similar pattern. 3. For the state merger intuitions to hold, nearby hidden states moving closer must result in their respective predictions also moving closer. Continuity provides this, but a weaker condition may be enough. If, for instance, the model is discontinuous but still on average predictions move closer when their hidden states do, the intuitions still make sense. In a neuroscience context this may be an interesting point to mention, so we will add a discussion in the paper. When adding a discontinuous jump in the output map we find a similar automaton development pattern, albeit with noisier dynamics. 4. The scale $G$ increases with the initial weight scale, but the rate at which it does will depend on the architectural details. Since a hyperbolic tangent non-linearity saturates and a ReLU does not, how $G$ and thus the occurrence of mergers depends as a function on the initial weight scale may be qualitatively different. In particular for a tanh model near the saturating regime one may expect less dependence on $G$ compared to $N$ which is not affected by architectural choices. 5. As mentioned in question 3, discontinuity is still okay as long as on average there is a local relationship between the representations and their predicted outputs. In the case of spiking networks the intuitions may still apply to the firing rates. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the detailed response! The reviewer is satisfied with the authors' rebuttal of the *Weaknesses* and *Questions for Authors* section and has raised their score to a 3 accordingly. If the authors sufficiently address the *Other Comments or Suggestions* section in the original review the reviewer will further raise their score to a 4. 
--- Reply to Comment 1.1.1: Comment: We were unable to go into detail on the "Other Comments Or Suggestions" section before due to character limit, but the suggestions are very much appreciated and we intend to make changes to improve clarity of the paper. - We chose gradient flow in our modeling to avoid introducing any form of noise. This is to demonstrate that noise is not a requirement for this form of generalization. We will mention this more explicitly in the text. - We will add a y-label to every plot that is currently missing one. - We will change "them same" in line 138 to "them the same". - The example in line 119 is meant to occur at some point halfway during training, where perhaps some datapoints are already fitted correctly, but others are not. This is not entirely realistic, as in practice predictions are learned simultaneously, but it can still be illustrative to consider as it may help us understand the effect of one representation on another one nearby. We will change the text to make the setting we are considering clearer. - From the first line of equation (3) we can see that $\langle w_i \rangle_i$ controls the velocity of $||dh||^2$. From the second line we see that the velocity of $\langle ||dy_i||^2 \rangle_i$ is also proportional to $\langle w_i \rangle_i$, although modulated by a positive factor. This means that $||dh||^2$ and $\langle ||dy_i||^2 \rangle_i$ will move in the same direction until $\langle w_i \rangle_i$ decays to zero, at which point they will converge at the same time. A lot of the terms in the third line do not seem to have a clear interpretation, so it is hard to give a very complete intuitive description. In numerical solutions what we see is $\langle w_i \rangle_i$ decaying to zero, but depending on the initialization sometimes it first overshoots zero. This results in $||dh||$ and $\langle ||dy_i||^2 \rangle_i$ changing direction at some point before convergence. When the target outputs agree we see exponential decay. 
- There isn't much of a derivation of equation (7), but we can make it clearer by explaining more in the text what is meant by $G$. When applying the recurrent map at initialization it will decrease representational distances by some factor $G$. Thus distances between hidden states at initialization will on average scale as $G^m$, where $m$ is the length of the sequence corresponding to the hidden state, and output prediction distances will scale as $G^{n+m}$, where $n$ is the length of the subsequent sequence. - In line 305 by first we mean the first pairs when one decreases the initialization, not first during training. This is unclear and we will change "first pairs to merge" to "first pairs to merge when decreasing the weight initialization". - We will add an explanation of how random regular tasks were generated to the paper. - We will change "mathematical structures" in line 436 to "mathematical objects". - We will elaborate more on the definition of a DFA and accepting states. - We will add a reference for Hopcroft's algorithm. - The $2$ in equation (14) should indeed be a $4$ and we will change this. It does not affect any of the results, as this factor of 2 can be absorbed into the arbitrary constant $\frac{1}{\tau_{y_i}}$. - We will change the target value notation from $y_{\alpha,i}$ to $y_{\alpha,i}^*$, to distinguish it from predictions. - As far as we could find there is no calculation error in equation (17), so the factor 2 in line 814 should be there. We will adjust the definition of $\frac{1}{\tau_h}$ to remove it anyway because it will make the equations look nicer, due to the other added factor 2 from equation (14). - The relationship in equation (19) was a guess by looking at the form of equation (18) and not derived from anything. - We will change the determinant and trace notations to $\text{Tr}(J)$ and $\text{det}(J)$.
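The scaling heuristic behind equation (7) — the recurrent map contracting representational distances by a factor $G$ per step — can be checked with a one-dimensional toy contraction. This is our illustration, not the paper's model; the scalar map and the value $G = 0.5$ are arbitrary choices:

```python
# Toy numeric check: composing a contractive map shrinks distances geometrically,
# so the distance between two hidden states after m steps scales like G**m.
def step(h, G=0.5):
    return G * h  # 1-d stand-in for a contractive recurrent map at initialization

h1, h2 = 1.0, -1.0
dists = []
for m in range(5):
    dists.append(abs(h1 - h2))
    h1, h2 = step(h1), step(h2)
print(dists)  # [2.0, 1.0, 0.5, 0.25, 0.125]: initial distance times G**m
```

The same argument applied for $n$ further steps from the hidden state to the prediction gives the $G^{n+m}$ scaling of output prediction distances.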
Using Unsupervised Dynamic Feature Selection to Enhance Latent Representations
Reject
Summary: This paper is concerned with feature selection for unsupervised learning. Feature selection is the act of selecting a subset of observed variables to either improve interpretability, reduce overfitting/improve performance, or reduce computation costs. In this work, authors propose a differentiable feature selection module to be placed before any unsupervised architecture. The proposed module is trained to output scores ($\in [0,1]$) for each input variable and, based on these scores, a binary mask is constructed. The hyperparameter M is added to the pipeline and defines the number of input variables that are selected. The authors propose an alternative to a simple sigmoid output for the proposed module, inspired by prior work, and propose to enforce the constraint of M "activated" variables not throughout the learning process but sparsely. The method is tested on clustering and world model training. For clustering, the authors test multiple image datasets and compare them against a wide range of methods. Results show similar performance or a slight performance increase. For world models, authors test one dataset and compare it against a foundational world model. For world models, they adapt the proposed architecture to ensure that the dimensionality of the input to the world model is reduced. They show a strong performance increase on both image reconstruction and action modelling. ### Update after rebuttal: My score reflects my opinion on the quality of this paper; the empirical results seem convincing enough. The concern rests mostly on the additional costs that the method brings, combined with concerns about the relevance of feature selection methods when dealing with larger datasets.
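The selection step described in the summary can be sketched in a few lines. This is a minimal pure-Python illustration, not the authors' implementation; `select_top_m` and the toy numbers are ours. A module outputs per-feature scores in [0, 1] via a sigmoid, and a binary mask keeps the M top-scoring input variables:

```python
# Sketch of score-based top-M feature selection (illustrative only).
import math

def select_top_m(logits, M):
    scores = [1.0 / (1.0 + math.exp(-z)) for z in logits]  # sigmoid scores in (0, 1)
    keep = sorted(range(len(scores)), key=lambda i: scores[i])[-M:]  # M best features
    mask = [1.0 if i in keep else 0.0 for i in range(len(scores))]
    return mask, scores

x = [0.3, -1.2, 0.8, 2.1, -0.5, 0.05]       # a 6-dimensional input
logits = [0.2, 1.5, -0.7, 0.9, -2.0, 0.1]   # learned selection logits
mask, scores = select_top_m(logits, M=3)
x_selected = [m * xi for m, xi in zip(mask, x)]  # masked input, same shape as x
print(mask)  # [1.0, 1.0, 0.0, 1.0, 0.0, 0.0] - top-3 logits at indices 0, 1, 3
```

In the paper's actual pipeline the hard top-M selection is made differentiable (e.g. via a hard concrete gate) and the constraint is enforced sparsely during training rather than at every step; this sketch only shows the masking semantics and why the output keeps the input's dimensionality.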
Claims And Evidence: Claims are that: - the authors propose a novel method for unsupervised feature selection - the method is flexible and has reduced memory consumption - the method improves the quality of representations (tested on clustering and world modeling) Evidence: - the method is different from prior work even though the fundamental concepts behind the proposed approach are not novel (learning a mask over the input data) - the method does not change the data size and as such is more flexible than prior work. Feature selection modules increase the memory consumption, but it does seem like this method increases it less than prior work. - the evaluation gives a good signal on the benefits of the approach, especially on world modeling; see sections below on how to strengthen the evaluation. Methods And Evaluation Criteria: Methods: - the proposed module makes sense given the problem at hand; it proposes to learn a mask and makes slight adjustments to the way the objective is used to learn this mask, with the use of the hard concrete gate and adjusted training procedure. Evaluation: How the method can improve interpretability, performance, and computational costs is what I believe should be evaluated to confirm the benefit of the approach. Authors show the performance improvement in clustering and world modeling. Results on clustering encompass multiple baselines and datasets with benchmark metrics. For world modeling, adding additional datasets would improve the robustness of the conclusions. Besides, the evaluations seem well-conducted. Theoretical Claims: There are none. Experimental Designs Or Analyses: Yes, I did check. As mentioned before, I believe the world modeling section of the empirical evaluation could be extended, notably with additional datasets. Additional experiments could be added to improve the robustness of the evaluation (see questions below). Besides, the evaluations seem well-conducted.
Supplementary Material: No Relation To Broader Scientific Literature: I believe the field of research tackled by the authors (i.e., feature selection) is related to methods aiming at _learning_ a mask in the field of Masked Image Modelling. While I don't think authors would compare against these methods, maybe a mention of this related field in the related work section would improve the context set by the authors. [1] Zhaowen Li, Zhiyang Chen, Fan Yang, Wei Li, Yousong Zhu, Chaoyang Zhao, Rui Deng, Liwei Wu, Rui Zhao, Ming Tang, and Jinqiao Wang. MST: Masked self-supervised transformer for visual representation. [2] Ioannis Kakogeorgiou, Spyros Gidaris, Bill Psomas, Yannis Avrithis, Andrei Bursuc, Konstantinos Karantzalos, and Nikos Komodakis. What to hide from your students: Attention-guided masked image modeling. [3] Yuge Shi, N. Siddharth, Philip Torr, and Adam R. Kosiorek. Adversarial masking for self-supervised learning. Essential References Not Discussed: See relation to broader scientific literature above. Other Strengths And Weaknesses: Strengths The paper proposes quite a simple idea, not groundbreaking but well-motivated, executed, and explained. The paper is well-written and easy to follow. With some additional empirical work done during the rebuttal, I would be happy to increase my score. I think the empirical setting could be extended but already looks convincing, as authors explore settings used in practice: clustering and world models, and in the latter show some improvements. Authors also kept the practical use of their method in mind by providing guidelines on which hyperparameters to use in practice (lines 425 onward). Weaknesses The paper tackles feature selection as opposed to feature extraction, as explained at the beginning of the paper. This leads to the need to train an additional module to select a subset of features to then either improve interpretability, performance, or subsequent computational costs.
I think these additional costs caused by the training of the "mask learning module" should be more thoroughly discussed, as they could limit the appeal of the approach (see suggestions and questions below). From a performance point of view, the proposed method does not learn any new feature but selects a subset of existing features, thereby limiting the overfitting of the model. I wonder how much this becomes useful as a function of the number of training samples and as such whether we can really see a benefit when increasing the sample size. Again see questions below. Other Comments Or Suggestions: Interpretability: I think an interesting additional experiment would be to propose an analysis of the features selected so that we can qualitatively/visually assess the features selected by the additional module. It would help further validate the proposed method. Questions For Authors: - could authors give some insights into the features selected with this approach in both the datasets used for clustering and world models? - can authors show an ablation across sample size? - could authors provide figures for the performance vs. training time and performance vs. number of parameters for unsupervised methods with and without DDS? - can authors elaborate on the impact of M and provide ablation studies across this hyperparameter? Would be nice to show how the clustering performance/reward varies. The optimal is probably dataset-dependent and as such would require some hyperparameter tuning. - I did not really understand the justification for the "perceptual loss" in section 4.2. - why are tables 1 and 2 distinct? Are settings different or are the datasets explored simply different? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for your thoughtful comments. To respond to the reviews received, we have added several new results to the paper, as well as revised important parts of the text. Below is a detailed response to your queries:

```
can authors show an ablation across sample size?
```

- We have performed an experiment on the Stanford Cars dataset, using the same network for all configurations. The results are shown below:

|size/M|0.03|0.06|0.12|0.24|
|-|-|-|-|-|
|32x32|0.0102|0.0081|0.0076|0.0068|
|64x64|0.0096|0.0075|0.0067|0.0062|
|128x128|0.0083|0.0077|0.0063|0.0059|

Bigger images tend to have lower MSE scores, as they have more smooth and redundant regions (e.g., backgrounds, large continuous color areas), which are easier for an autoencoder to reconstruct.

```
could authors provide figures for the performance vs. training time and performance vs. number of parameters for unsupervised methods with and without DDS?
```

- We agree with the reviewer that this is a very interesting topic. Unfortunately, we did not store any information regarding the computational time. However, we ran some simple tests, showing that both training and inference time per epoch increase by 20-40% with respect to the same architecture without our module. Beyond that, the extra time is directly related to the architecture selected for our model. This is now mentioned in the text. Sadly, the time and space limitations prevented us from thoroughly studying the effect of the number of parameters on performance.

```
can authors elaborate on the impact of M and provide ablation studies across this hyperparameter? Would be nice to show how the clustering performance/reward varies? the optimal is probably dataset-dependent and as such would require some hyperparameter tuning.
```

- This concern is addressed in our responses to Reviewer ESRv and Reviewer GsU7. We provide an ablation on various values of M.
Additionally, we expanded the evaluation to a second dataset (Stanford Cars), demonstrating that the same trend holds. We also clarified how the choice of M relates to latent dimensionality and task difficulty. Indeed, the optimal M is dataset-dependent, and we now make this explicit in the revised text and mark it as an important axis for tuning in practice.

```
could authors give some insights into the features selected with this approach in both the datasets used for clustering and world models?
```

- In both cases the algorithm tends to select regions with high contrast, like contours (see Figure 7). As a result, selected features consistently correspond to crucial structural aspects, shapes, and textures with small pixel patterns. This approach enables highly efficient compression into a compact latent space while maintaining structural fidelity and generating sharp, clear reconstructions.

```
I did not really understand the justification for the "perceptual loss"...
```

- The perceptual loss intervenes in our two-stage training procedure. Initially, we train and freeze the UNet’s and upscaling modules. Subsequently, we train a compact VAE on the intermediate representations (h). Reconstruction errors are computed at multiple intermediate layers of the already trained (frozen) upscaling module, thus forming a perceptual loss. This loss ensures that the VAE prioritizes structural coherence in the encoded representation (h) rather than pixel-wise MSE alone.

```
I believe the world modeling section of the empirical evaluation could be extended notably with additional datasets. I think an interesting additional experiment would be to propose an analysis of the features selected so that we can qualitatively/visually assess the features selected by the additional module. It would help to further validate the proposed method.
```

- Following this valuable suggestion, we have strengthened the world model experiment by adding an extra environment (SuperMarioBros-v0) and incorporating a recent advanced baseline: Masked Autoencoders (MAE) [CVPR 2022], which employs an asymmetric ViT encoder-decoder architecture. As with DDS+VAE, we adopted a two-stage training strategy: first, we trained the MAE by masking 75% of input patches, then we trained a compact VAE on the latent patches. Quantitative results demonstrate DDS's superior reconstruction quality across datasets:

MSE
|Dataset|VAE|MAE+VAE|DDS+VAE|
|-|-|-|-|
|CarRacing-v3|0.00165|0.00220|0.00039|
|SuperMarioBros-v0|0.00134|0.00114|0.00105|

Moreover, DDS produces significantly more realistic "dream" sequences, as measured by FID and FVD:

|Dataset|Metric|VAE|MAE+VAE|DDS+VAE|
|-|-|-|-|-|
|CarRacing-v3|FID|59.46|54.84|25.35|
|CarRacing-v3|FVD|239|312|176|
|SuperMarioBros-v0|FID|64.06|60.21|61.08|
|SuperMarioBros-v0|FVD|412|465|338|

```
why are tables 1 and 2 distinct?...
```

- We merged Tables 1 and 2, as suggested.
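To make the two-stage perceptual loss described in this rebuttal concrete, here is a minimal numpy sketch. The frozen module is stood in for by fixed random linear layers; all shapes, names, and the `tanh` nonlinearity are illustrative assumptions rather than the paper's actual UNet/upscaling architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen, already-trained upscaling module: three fixed
# linear layers (illustrative assumption; the paper uses a UNet/upscaler).
frozen_layers = [rng.standard_normal((16, 16)) for _ in range(3)]

def intermediate_activations(h, layers):
    """Run h through the frozen layers, keeping every intermediate output."""
    acts, x = [], h
    for W in layers:
        x = np.tanh(x @ W)
        acts.append(x)
    return acts

def perceptual_loss(h_recon, h_target, layers):
    """Sum of MSEs between intermediate activations of the frozen module,
    so the VAE is penalized for structural (not just pixel-wise) errors."""
    acts_r = intermediate_activations(h_recon, layers)
    acts_t = intermediate_activations(h_target, layers)
    return float(sum(np.mean((a - b) ** 2) for a, b in zip(acts_r, acts_t)))

h_target = rng.standard_normal((4, 16))                  # representation h
h_recon = h_target + 0.1 * rng.standard_normal((4, 16))  # VAE output for h
loss = perceptual_loss(h_recon, h_target, frozen_layers)
```

In this toy form, a perfect reconstruction of h gives a loss of exactly zero, while any structural deviation is magnified through the frozen layers.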
Summary: This paper proposes Dynamic Data Selection (DDS), an unsupervised dynamic feature selection method designed to enhance latent representations. The authors claim that DDS can improve model performance in some unsupervised tasks, such as clustering and world models, by removing noisy or redundant features. The memory consumption of DDS is minimal and is invariant to the maximum number of selected features. Moreover, the authors claim that the proposed method is easily adapted to a variety of problems and architectures. Experimental results on synthetic and real-world datasets show that DDS improves clustering metrics (NMI, ACC, ARI) and reduces reconstruction errors in world models.

Claims And Evidence:

Supported Claims: (1) DDS improves clustering performance while using fewer features. This is supported by Tables 1 and 2. (2) DDS enhances the reconstruction quality of world models (see Table 3). (3) DDS is computationally efficient and preserves input structure.

Problematic Claims: (1) The authors claim that DDS is invariant to the maximum number (i.e. M) of selected features, but no ablation studies about M are presented. Although Table 5 presents the reconstruction MSE over CIFAR-10 retaining different M features, the results on only one dataset are not convincing. Besides, the authors do not provide the reasons or references for M=64,128,... in Table 5.

Methods And Evaluation Criteria:

Strengths: (1) Synthetic datasets and world models are considered for comparisons. (2) Comprehensive comparisons with state-of-the-art clustering methods are conducted.

Weaknesses: The authors do not provide descriptions or references for ACC, NMI, and ARI, even though these metrics are common. The authors need to reorganize the structure of the experiment section. It is hard to capture the key experimental results and analyses clearly.
Theoretical Claims: The authors do not provide any proofs in this paper, but they give sufficient formulations to explain their motivation. The drawback is that some notations and definitions, such as $\Theta$ and $\Vert \cdot \Vert_0$ (i.e. the $\ell_0$-norm), are not explained.

Experimental Designs Or Analyses: (1) Most of the experimental results and visualizations are beneficial to soundness. However, the organization of the experiment section, e.g., Section 4.2, is confusing. It is better to reorganize the structure of the experiment part, including the settings, metrics, and different case studies (clustering and world models). (2) In Section 4.2, the authors spend a lot of time introducing world models and do not intuitively show the relevant experimental results and analysis. Some algorithms should appear in the method or related work section rather than the experiment section. (3) The authors do not provide descriptions or references for ACC, NMI, and ARI. (4) The parameter analysis (Figure 5) is conducted on only one dataset, CIFAR-10, under 4 groups, e.g. $(\kappa,\varepsilon)={(0,0.1),(0.1,0),(0.1,0.1),(1,0.1)}$ for clustering. It is better to add more groups of $(\kappa,\varepsilon)$ and more datasets.

Supplementary Material: The authors do not provide supplementary material.

Relation To Broader Scientific Literature: The paper is based on dynamic feature selection, e.g. L2X (Chen et al., 2018) and INVASE (Yoon et al., 2018), and unsupervised representation learning, e.g. ProPos (Huang et al., 2022). The proposed method extends DFS to unsupervised settings.

Essential References Not Discussed: The paper reasonably cites the basic theory of DFS. A recent work, Soham Gadgil, Ian Connick Covert, Su-In Lee: Estimating Conditional Mutual Information for Dynamic Feature Selection. ICLR 2024, could be discussed.

Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: Why does DDS+ProPos perform worse than ProPos on CIFAR-10, with about a 1% reduction in clustering results? The authors claim that "The DDS+ProPos model maintains similar results to the original ProPos, even when using much fewer selected features", but the results in Table 1 do not directly prove this claim. I suggest that the author could use other forms of presentation to illustrate this claim. The current form of Table 1 makes it easier for the reviewer to see that the proposed results are significantly lower than those of ProPos. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We would like to thank the reviewer for this kind evaluation. Below we will answer some of the concerns regarding this contribution.

```
The authors claim that DDS is invariant to the maximum number (i.e. M) of selected features, but no ablation studies about M are presented.
```

- We agree with the reviewer that this point is not properly explained. Since the output of the selector module $\tilde{g}(\mathbf{X}; \mathbf{\Theta}_S)$ is a vector of the same size as the input data, the module architecture has the same size no matter which M value is chosen. Thus, it only differs in two key points:
  - Finding the top-M values: Although it is not clearly stated in the documentation, we performed a quantitative estimation of how M affects the time in a toy example. It suggested a complexity close to $\mathcal{O}(\log M)$.
  - Creating the mask $\mathbf{\Gamma}_M$: The complexity is $\mathcal{O}(N)$, where N is the total number of features, so it does not depend on the number of selected features.

Thus, since both the training and the inference complexity are higher than $\mathcal{O}(N)$, which in turn dominates $\mathcal{O}(\log M)$, we can conclude that the algorithm's complexity is invariant to M. This information is now included in the Appendix.

```
Although Table 5 presents the reconstruction MSE over CIFAR-10 retaining different M features, the results on only one dataset are not convincing. Besides, the authors do not provide the reasons or references for M=64,128,... in Table 5
```

- We have amended the text to clarify this point. We have selected this number of features because it matches the number of features of the latent representation of the Naive Autoencoder when changing the initial number of channels in multiples of 2. We have also introduced a new comparison using the Stanford Cars dataset.
Since the image sizes are bigger (360x240), we have also selected bigger M values to match them against the number of features obtained by the internal representation of the naive AutoEncoder. The obtained results are as follows:

|M (% of total features)|Naive MAE|Naive MSE|DDS MAE|DDS MSE|
|-|-|-|-|-|
|3136 (1.2%)|7.99E-02|1.33E-02|7.48E-02|1.18E-02|
|6272 (2.4%)|6.73E-02|1.05E-02|3.83E-02|3.47E-03|
|12544 (4.8%)|4.52E-02|5.67E-03|2.26E-02|1.27E-03|
|25088 (9.7%)|3.55E-02|3.47E-03|1.34E-02|4.91E-04|
|50176 (19.4%)|3.09E-02|2.86E-03|6.45E-03|1.00E-04|

The results follow the same pattern presented with the CIFAR-10 dataset: as we double the number of features, the error is almost halved. In contrast, a classic deep representation cannot reduce the error by that much. Due to the space constraint, we decided not to show these results, as they do not provide extra information, but we will include them in the Appendix section.

```
Why does DDS+ProPos perform worse than ProPos on CIFAR-10, with about a 1% reduction in clustering results?
```

- The text is now revised to make a more accurate claim. We think the reason behind the worse results in CIFAR-10 is related to the image quality, as they are very small and blurry. Thus, by selecting only 25% of the features, the algorithm cannot capture the essence of the objects included. We think a drop of 1% in clustering results, when dropping 75% of the image features, is a very interesting tradeoff. The sentence is now replaced by this explanation.

```
The parameter analysis (Figure 5) is conducted on only one dataset CIFAR-10 under 4 groups for clustering. It is better to add more groups about and datasets.
```

- We agree that a more thorough experimentation would improve the parameter analysis. However, due to the time and space limitations, we had to select a representative set of experiments for this part of the experimental section.

---

Rebuttal Comment 1.1:

Comment: Thank you for the responses.
They addressed most of my concerns. I maintain my rating of weak accept.

---

Reply to Comment 1.1.1:

Comment: Thank you for your kind review. Do not hesitate to raise any other questions or suggestions you may have regarding our paper.
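As a companion to the complexity discussion in the rebuttal above, here is a minimal numpy sketch of the top-M masking step. The selector network is replaced by precomputed scores, and whether kept features are additionally weighted by their scores (cf. ablation Ex2) is left out; all names here are illustrative assumptions:

```python
import numpy as np

def top_m_mask(scores, M):
    """Binary mask Gamma_M keeping the M features with the highest selector
    scores. The mask has the same shape as the input regardless of M."""
    flat = scores.ravel()
    idx = np.argpartition(flat, -M)[-M:]  # indices of the top-M scores
    mask = np.zeros_like(flat)
    mask[idx] = 1.0
    return mask.reshape(scores.shape)

scores = np.array([[0.1, 0.9],
                   [0.7, 0.2]])           # toy selector output, one per pixel
mask = top_m_mask(scores, M=2)
X = np.array([[10.0, 20.0],
              [30.0, 40.0]])              # toy input image
masked_input = X * mask                   # the masked image fed to the model
```

Note that the mask construction touches every one of the N features once, while selecting the top-M entries is handled by `argpartition`, which is why the overall cost is dominated by N rather than by the chosen M.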
Summary: This paper proposes a method for pixel masking. The masking module is introduced to mask certain pixels of the input image and replaces the original input image with its masked version. The module is trained using a reconstruction loss, and the paper claims that using such a masked version of the input during both training and inference improves the quality of the final latent representation. The paper demonstrates the effectiveness of the method on clustering and world model tasks.

## update after rebuttal

In the rebuttal, the authors have addressed most of my concerns, particularly regarding the limited experimental settings, analysis, and comparisons, so I raise my score to Weak Accept.

Claims And Evidence: Although the authors demonstrated the effectiveness of the proposed method through experimental results, it seems to me that the experiments are conducted under limited settings to verify general effectiveness in improving the latent representations, and there is no explanation or discussion of why their method should be effective. More details are listed below.

Methods And Evaluation Criteria:
- Incorporating the proposed method into an existing VAE (world model) or combining it with the ProPos method (clustering) has been shown to be effective.
- However, as aforementioned, it seems to me that this paper does not provide any explanation or discussion of why this method ultimately contributes to improving the latent representation.
- From the perspective of reducing data information through the reconstruction loss, this method is similar to PCA, and it appears related to previous cases in which reconstruction loss enhanced the quality of latent representations [A, B], but these connections are not discussed.
- Also, since pixel masking and reconstructing the original image is included in training, it seems related to MAE, yet there is no relevant discussion.
[A] TAFSSL: task-adaptive feature subspace learning for few-shot classification, ECCV 20
[B] Unsupervised Embedding Adaptation via Early-Stage Feature Reconstruction for Few-Shot Classification, ICML 21
[MAE] Masked Autoencoders Are Scalable Vision Learners, CVPR 2022

Theoretical Claims: No theoretical claims raised.

Experimental Designs Or Analyses:
- As mentioned above, it seems that the experiments were conducted under limited settings. Since the paper claims an improvement in general latent representations, showing improvement only on clustering and world model tasks seems limited, and it only compares against old baselines.
- More general benchmarks are recommended to assert an improvement in general latent representation as stated in the paper, e.g., self-supervised learning benchmarks, which include a variety of downstream tasks such as classification, retrieval, and segmentation.
- More ablation studies would strengthen the paper, such as whether pixel masking applied only during training improves the performance (as in MAE) or whether it could be applied to a frozen pretrained model.
- It seems that in some of the experiments the model was trained for many more training epochs, compromising the fairness of the comparison.

Supplementary Material: I reviewed the qualitative comparison results of the world model in the supplementary material. However, since only a single trajectory result was presented, it does not seem enough to conclusively evaluate the effectiveness.

Relation To Broader Scientific Literature: This paper investigates a challenging yet versatile approach to improving latent representation quality in an unsupervised manner, independent of any specific task.

Essential References Not Discussed: Importantly, there should be a comparison and discussion with methods that use reconstruction criteria, e.g., PCA, for feature extraction.
Moreover, methods based on masked image modeling have demonstrated effectiveness in learning good representations; the paper lacks discussion of this area.

Other Strengths And Weaknesses:

Strengths
- The method is very simple and can be incorporated into any architecture taking image input.

Other Comments Or Suggestions:
- Please address the weaknesses listed in the comments regarding the method and the experiments.
- Typo: (5.2) as feature work -> as future work

Questions For Authors:
- Comparison with, or incorporation of, more recent methods would strengthen the paper. Currently all the baselines were presented before 2022.
- How does DDS masking perform if it is only applied during training? As in MAE training, may the representation quality simply be improved because the model is trained to reconstruct images with partially masked pixels?
- Is performance improved only when DDS is applied during both training and inference?

Ethical Review Concerns: No concern

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for their insights. Below we will discuss some of the concerns point by point.

```
why this method contributes to improving the latent representation
```

- The paper now explains the intuition behind our idea. In essence, we argue that every sample often contains information that is either irrelevant or leads to erroneous interpretations. In images, this can occur due to background elements. Thus, we aim to improve the latent representations by removing such information, which may appear in different locations depending on the input data. This is why our solution employs a dynamic feature selection approach.

```
From the perspective of reducing data information through the reconstruction loss, this method is similar to PCA, and it appears related to previous cases in which reconstruction loss enhanced the quality of latent representations [A, B], but these connections are not discussed
```

- We thank the reviewer for bringing up these two interesting contributions, which are now included in the paper. Contrary to the linear latent representation provided by PCA, our algorithm is attached to models that aim for a more complex representation. The idea of our contribution is to enhance the model's capabilities by attaching our module to it. Thus, our goal is to demonstrate that, when attached to any given model for any given unsupervised task, our module is capable of increasing the model's performance. This information is now included in the introduction section, aiming to clarify the main contributions of this paper.

```
it seems related to MAE, yet there is no relevant discussion.
```

- Following this suggestion, we have included MAE in the experimental results regarding the world model problem. Please check our response to Reviewer bAdm for detailed results.

```
some of the experiments the model was trained for a much longer training epochs, compromising the fairness of the comparison.
```

- The reviewer raises a valid point regarding the longer training times for DDS+ProPos compared with ProPos. We have amended the text to clarify that ProPos’ training plateaus long before it reaches the tested number of epochs, with its results not changing significantly for longer training. Although ProPos achieves the listed performance with fewer training epochs, when complemented with DDS it continues to improve. This is highlighted by the experiment that allows twice as many training epochs for DDS+ProPos on the Tiny-Imagenet dataset, which obtains an 80% improvement in ARI with respect to ProPos. The text now explains this in detail.

```
Comparison or incorporating more recent methods will strengthen the paper.
```

- We agree with the reviewer that some novel baselines, like SPICE or its variants, could be included. We also want to mention that we discarded most of them because they either resize the images or use additional training data. However, we consider that the suite of methods we tested offers a thorough comparison.

```
How does DDS masking perform if it is only applied during training? May the representation quality simply be improved because the model is trained to reconstruct images with partially masked pixels? Is performance improved only when the DDS is applied both training and inference?
```

- We agree that our paper would benefit from more thorough experimentation. We added two extra experiments to the ablation section:
  - Ex1: Removing the selection mechanism during inference (equivalent to forcing all feature selection scores to be 1, for all pixels).
  - Ex2: During inference, keeping the M selected features, but forcing their importance to be 1, regardless of the score.
The obtained results are as follows:

| M | 64 | 128 | 256 | 512 | 1024 |
| ----- | ---- | ---- | ---- | ---- | ---- |
|Naive|0.018|0.012|0.008|0.005|0.004|
|DDS|0.016|0.009|0.005|0.001|0.0002|
|Ex1|0.2|0.2|0.2|0.2|0.2|
|Ex2|0.2|0.2|0.2|0.2|0.2|

These results show, as expected, very poor reconstructions with both Ex1 and Ex2. By design, the weights associated with masked inputs can't be trained during training, but they greatly impact the model's output at test time, hence the poor performance. These results are consistent with DDS’s role as a feature selector and don’t hinder its ability to obtain better latent representations. Due to the time and space limits, we couldn’t add any experiments on how the algorithm behaves when we don't constrain the number of selected features to be M during inference, although we list this as a line to explore in future work.

```
More general benchmarks are recommended to assert an improvement in general latent representation, e.g., self-supervised learning, classification, retrieval, or segmentation.
```

- We agree that a more thorough experimentation would improve this contribution. Given the constraints on time and space, we focused on presenting a carefully chosen subset of experiments that best represent our contribution.

---

Rebuttal Comment 1.1:

Comment: I thank the authors for addressing my concerns. I appreciate the additional comparison with MAE, the additional experimental settings (e.g., SuperMarioBros), and the additional ablation results. However, it seems to me that the following concern remains:
- About "why this method contributes to improving the latent representation": It still seems to me that the explanation does not clarify why “pixel masking” specifically must be chosen to remove irrelevant information.

Overall, I raise my score to Weak Accept, as the rebuttal mostly addressed my concerns, particularly regarding the limited experimental settings, analysis, and comparisons.
--- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful feedback. Our motivation for choosing pixel masking stems from our desire to implement a dynamic feature selection mechanism that allows explicit control over the amount of information retained or discarded. In contrast to regularization techniques, which often require careful tuning and may offer less direct control over the degree of information suppression, pixel masking provides a more interpretable and configurable way to selectively filter input signals. By adjusting the masking ratio, we can precisely regulate how much information is passed through to the latent representation, enabling us to systematically study and influence the abstraction process. This targeted control aligns closely with our goal of improving latent representations by progressively filtering out irrelevant or redundant features in a structured manner. In practice, pixel masking was our best attempt to implement the objective described in Eq. 1, as it offers a straightforward and controllable mechanism for information selection. That said, we acknowledge that other approaches could also be used to address this objective—potentially including more advanced or learned selection mechanisms—but exploring those alternatives is beyond the scope of this paper. We appreciate you pointing out the need for further clarification, and we will make sure to explicitly explain this motivation in the revised version of the paper.
🏆 COPA: Comparing the Incomparable to Explore the Pareto Front
Reject
Summary: The paper proposes COPA (Comparing the Incomparable to Explore the Pareto Front), a novel approach for comparing and aggregating multiple objectives in machine learning model selection. The authors address the challenge of meaningfully comparing objectives with different scales and semantics (e.g., accuracy vs CO2 emissions) by transforming them through their CDFs, approximated by relative rankings. This allows for principled navigation of the Pareto front while respecting user preferences. The method is demonstrated on several important applications including LLM selection, domain generalization, and AutoML benchmarking. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: yes Relation To Broader Scientific Literature: Related to some extent. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: - The approach is theoretically well-grounded with clear analysis of properties - The problem being solved (comparing incomparable objectives) is highly relevant to modern ML - Extensive empirical validation across multiple important domains - Clear practical impact for model selection and benchmarking - The implementation is relatively simple yet effective Weaknesses: - The computational overhead of computing rankings for large model populations could be discussed more - Some discussion of failure cases or limitations would be valuable - Additional ablation studies on the choice of p parameter could help inform practical usage - The connection to existing work in multi-criteria decision making could be expanded Other Comments Or Suggestions: None Questions For Authors: - How does the method scale with very large numbers of models/objectives? - Are there cases where the CDF transformation could be misleading? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive and positive feedback. We are especially grateful for the kind words towards our work, acknowledging its importance and potential impact on modern ML. It is also encouraging to see the reviewer confirming the validity of our approach and derivations, as well as of the empirical validation of COPA. > The computational overhead of computing rankings for large model populations could be discussed more. > How does the method scale with very large numbers of models/objectives? We agree, and we will add a paragraph discussing the overhead of COPA in detail for the camera-ready. Glossing over minor details, an implementation of COPA consists of sorting K arrays, each of them containing the performance of N models (see the implementation on the notebooks in the supplementary material). As a result, the overall time complexity of COPA is of $O(K N \log N)$, which is comparable to many light-weight preprocessing steps commonly used in the ML pipeline. > Some discussion of failure cases or limitations would be valuable. > Are there cases where the CDF transformation could be misleading? We provide COPA, and thus the CDF, as a complement (not a replacement) of the original objectives as there is no need to discard them (i.e., the marginals). In fact, *in figures 1-3, 5 and table 2, we plot the Pareto-front exploration in the original objective space* to enable decision makers to perform intra-objective comparisons. Otherwise, we would lose vital information regarding the marginal information such as sudden phase-changes, as rightfully pointed out by reviewer w3Vj. Regarding the limitations of the CDF, we can think of one main case where the transformation can be misleading: If the objectives turn out to be discrete rather than continuous (as assumed in line 55), then the resulting variable will no longer resemble a standard uniform one (as claimed in lines 180-184). 
We will include a paragraph stressing the need to meet our assumptions in the camera-ready version.

> Additional ablation studies on the choice of p parameter could help inform practical usage.

While we already try to provide some guidance and intuition on the choice of $p$ to practitioners, especially in the last paragraph before section 4 and the experiment in figure 3, we acknowledge that additional experimental results could help in choosing $p$. Using the simple notebooks provided in the supplementary material, we will add extra results for the existing experiments using different values of $p$.

> The connection to existing work in multi-criteria decision making could be expanded.

We will expand the existing discussion of related works. Besides MOO ML (i.e., estimation of the Pareto front) and multi-objective Bayesian optimization, we will discuss existing works in multi-criteria decision making. We invite the reviewer to share any specific work they may have in mind that we might have missed in the first version.

We appreciate the reviewer's time and questions. We hope to have sorted out all existing questions and, if that is the case, we kindly ask the reviewer to revisit the review if it feels appropriate. If there are further questions, we are happy to address them in the next phase of the rebuttal.
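The rank-based CDF approximation at the heart of this discussion (sorting each of the K objectives over N models, hence the $O(KN \log N)$ cost mentioned above) can be sketched in a few lines of numpy. Tie handling and the weighted $p$-norm aggregation of Eq. 12 are omitted, so the normalization and all names here are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def rank_cdf(values, higher_is_better=True):
    """Approximate each model's CDF value by its relative rank among N models."""
    v = np.asarray(values, dtype=float)
    if not higher_is_better:      # e.g. CO2 emissions: lower is better
        v = -v
    order = np.argsort(v)         # one O(N log N) sort per objective
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(v))
    return (ranks + 1) / len(v)   # values in (0, 1], roughly uniform

# Two objectives measured in incomparable units for four models:
acc = [0.91, 0.85, 0.88, 0.80]    # accuracy, higher is better
co2 = [12.0, 3.0, 5.0, 2.0]       # CO2 emissions, lower is better
u = np.stack([rank_cdf(acc), rank_cdf(co2, higher_is_better=False)])
# Each row of u is now on a common (0, 1] scale; the paper then aggregates
# these comparable scores with a weighted p-norm (Eq. 12) to rank models.
```

The transform keeps the original objective values available alongside `u`, matching the rebuttal's point that the CDF complements, rather than replaces, the raw objectives.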
Summary: This paper proposed "COPA: Comparing the Incomparable to Explore the Pareto Front". The authors claim that it is often unclear how one should compare, aggregate and, ultimately, trade off these objectives, as they might be measured in different units or scales. The authors propose to make incomparable objectives comparable via their CDFs, approximated by their relative rankings.

Claims And Evidence: The authors propose three interesting case studies to show the effectiveness of their method.

Methods And Evaluation Criteria: The authors use the Open LLM Leaderboard (Fourrier et al., 2024), which is pretty new and appropriate.

Theoretical Claims: Theoretical claims are largely based on previous literature, e.g., (Miettinen, 1999, Thm. 3.4.1).

Experimental Designs Or Analyses: Experiments are conducted on three case studies.

Supplementary Material: I have roughly gone through the supplementary material and the results seem correct.

Relation To Broader Scientific Literature: This paper is highly related to LLM evaluation and content moderation.

Essential References Not Discussed:
1. DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models. This paper offers MOO criteria.
2. Panacea: Pareto Alignment via Preference Adaptation for LLMs. This paper combines post-training of LLMs and multi-objective optimization.

Other Strengths And Weaknesses:

Strength:
1. This paper combines MOO and modern LLMs.

Other Comments Or Suggestions:
1. The notation y1 ∼ U(0.02, 0.2) is not proper. Consider changing it to y_1 \in [0.02, 0.2].

Questions For Authors:
1. For the LLM models evaluated, do you use public data or did you train those models and evaluate them yourself?
2. My main concern is that both the proposed MOO methods and the evaluated models were proposed by other papers. For example, the Tchebycheff or p-norm aggregation functions have been proposed for years. Up to now, it seems that this paper only merges these two directions.
If that is true, the contribution of this paper seems limited to me. Is there a new technical contribution in this paper? 3. (Cont. from 2) For example, line 292 right: it seems that the authors just gather the results from some existing models. Therefore, what is the contribution of this work? 4. Line 320 right: CelebA is not an LLM benchmark. What is the purpose of using CelebA as an introduction here? Ethical Review Flag: Flag this paper for an ethics review. Ethical Review Concerns: NA. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their work, and we are happy to hear that the cases to which we apply COPA are interesting, and that our evaluation is new and appropriate. We hope the following helps the reviewer better understand our work. > This paper combines MOO and modern LLMs. > This paper is highly related to LLM evaluation and content moderation. First, we want to stress that the scope of our work is multi-criteria evaluation in modern ML in general, with LLM selection being one of our use cases in Section 5, which includes LLM selection, domain generalization, fair ML, and AutoML benchmarking. > It seems that the authors just gather the results from some existing models. Therefore, what is the contribution of this work? Indeed, we gather publicly available evaluation data for all experiments except the synthetic and FairGrad ones (as stated in the appendix), which does not diminish the value of our work (in the end, these are our “datasets”) and ***only corroborates how broad and common a task multi-criteria evaluation is***, and thus the potential impact of COPA. Our main contribution is a simple, yet general, approach to evaluate, compare and select ML models in terms of several (often) non-comparable objectives, by casting the problem to a probabilistic MOO problem (see lines 75-100 for further details). The simplicity of COPA is not a weakness, but a strength, and can potentially have significant impact in many areas of ML, as acknowledged by reviewer UMh1. > […] Tchebycheff or the p-norm aggregation function has been proposed for years. [...] Is there new technical contribution in this paper? While we hope to have clarified the contributions of our work, let us remark that the proposed weighted $p$-norm in Eq. 12 is a novel technical contribution. In lines 234-236 we discuss the differences between this norm and the usual weighted $p$-norm, which does not serve to intuitively map user preferences.
To make this point clearer, we have reproduced figure 1 of the main paper using the usual weighted p-norm (see [the new figure here](https://anonymous.4open.science/r/rebuttal-copa/diff-pnorms.png)). Furthermore, matching the Tchebycheff problem with $p=\infty$ is just one property of our norm that, however, does not hold for the regular weighted p-norm (as can be [seen here](https://en.wikipedia.org/wiki/Generalized_mean#Special_cases)). > CelebA is not an LLM benchmark. What is the purpose of using CelebA as an introduction here? CelebA is the dataset of one of the 5 total experiments we show in the main manuscript, covering many areas in ML (LLMs, fair ML, MTL, domain generalization, and AutoML). In our experiments CelebA is used to show how COPA enables practitioners to choose ML models that achieve a sensible fairness-accuracy trade-off. > DecodingTrust and Panacea references. We appreciate the shared references, and we will add them to the camera-ready revision as related work. Regarding DecodingTrust, we found it really interesting as it serves as yet another use case for which the adoption of COPA can have a real impact, as the authors provide a set of objectives to evaluate LLMs and attempt to make them comparable (Appendix I.1), ultimately taking their average as the score. Given the similar format of the DecodingTrust and Open LLM Leaderboards, we have applied COPA to the former leaderboard too, sorting the considered LLMs with different user-given preferences (we provide [tables for three values of $p$](https://anonymous.4open.science/r/rebuttal-copa/decodingtrust-rankings/decodingtrust-p=inf.pdf)). We will include the results as an additional experiment in the appendix.
Among the most interesting take-aways, with COPA we can see that GPT-4 ranks among the least robust models in terms of DecodingTrust objectives (as it is the least fair LLM), while it is the 6th best model using the provided Overall score, as shown in [their online leaderboard](https://decodingtrust.github.io/leaderboard/). We hypothesize that the reviewer flagged our work for an ethics review by mistake. Otherwise, we would appreciate some explanation in this regard. We hope to have addressed all the concerns from the reviewer. If so, we would appreciate it if the reviewer could revisit their review to reflect these changes. We are happy to clarify any further questions in the next phase of the rebuttal period.
Summary: The goal of the paper is to address the challenge of multi-objective machine learning evaluation where objectives are often incomparable due to differing semantics, units, and scales (e.g., comparing model performance and CO2 emissions). It proposes a novel method, COPA (Cumulative-based Optimization of the Pareto front), to make such objectives comparable. The COPA algorithm consists of the following main steps: 1. Problem Setup - Define the multi-objective optimization problem as $\min_{h \in H} \mathbf{y}(h) = [y_1(h), y_2(h), \dots, y_K(h)]$, where $\mathbf{y}(h)$ is a vector of $K$ objectives for model $h$. 2. CDF Normalization - Normalize objectives using their CDFs: $u_k = F_k(y_k) \sim \text{U}(0, 1)$, where $F_k(y_k)$ is the cumulative distribution function of the $k$-th objective. When CDFs are unknown, approximate them using relative rankings: $\hat{u}_i = \frac{\text{rank}(y_i)}{N}$. 3. Preference Integration - Define a criterion function $C$ to aggregate normalized objectives. For example, use a weighted $p$-norm: $C(\mathbf{u}) = \left( \sum_{k=1}^K |\omega_k \cdot u_k|^p \right)^{1/p}$, where $\omega_k$ are objective importance weights ($\sum_k \omega_k = 1$), and $p$ determines the aggregation method (e.g., $p = \infty$ for robust optimization). 4. Optimization - Solve the optimization problem $\min_{i = 1, \dots, N} C(\hat{\mathbf{u}}_i)$, where $\hat{\mathbf{u}}_i$ is the vector of normalized objectives for model $i$. 5. Model Selection - Select the model(s) with the smallest value of $C(\hat{\mathbf{u}})$, reflecting the desired trade-off. Claims And Evidence: Claims are clear. Methods And Evaluation Criteria: I don't think it makes sense. Users could have complex preferences which are hard to score by weights. It is not proper to assume that there could be a weight between different objectives.
For example, users might like to maximize A+2B when C<0.5, whereas they maximize A+B+C when C>=0.5. This phenomenon is significant when the number of objectives is larger than 2. Thus, we cannot assume the criterion function C must be differentiable and easy to optimize, as in Section 3.3, Incorporating Preferences into the Optimization. Theoretical Claims: No theoretical claims in the paper. Experimental Designs Or Analyses: See Methods And Evaluation Criteria. Supplementary Material: Code read. Relation To Broader Scientific Literature: LLM, multi-objective optimization. Essential References Not Discussed: Missing related Pareto front estimation work: - Pareto Merging: Multi-Objective Optimization for Preference-Aware Model Merging - MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation etc. Other Strengths And Weaknesses: Strengths: COPA uses CDFs to normalize objectives, ensuring all objectives, regardless of their semantics or scale, are comparable. It is objective-agnostic and preserves Pareto-optimality. Weaknesses: 1. I am skeptical about the project's incentives, particularly based on Figure 1. The statement, "This is reflected in the retrieved LLMs where, for α = 1/2, COPA finds a top-18% model for both objectives, while all other approaches select either a high-performing but CO₂-intensive model or a low-performing but ‘CO₂-free’ model," suggests that the authors assume a compromise between performance and carbon footprint is inherently preferable. However, this assessment should depend on user preferences. For instance, if a user has no carbon credits available for emissions, a completely "CO₂-free" model should be the better choice. These works should focus more on getting the correct Pareto front rather than helping the user pick the optimal choice. As long as we have a perfect Pareto front, practitioners can immediately find their own optimal choice according to their preference. 2.
Based on the summary and Weakness 1, the main contribution of the paper appears to be in the second step: CDF estimation (the solution to "How can we make objectives comparable?"). However, I do not think this trick is a sufficient contribution for a full paper. 3. Taking a step back, the CDF estimation itself also has some problems: - The approximation depends on the sample size N. With very small N, the accuracy of the estimated CDFs may degrade, potentially impacting the results. - Rankings discard precise information about the relative distances between objective values, which might be important in some cases. Sometimes, certain thresholds could be critical for some metrics. E.g., to pass a course, one needs to score 60; some dynamics undergo a phase change when a certain value is larger (or smaller) than a threshold. Ranking-based CDF estimation would cause trouble in this situation. Other Comments Or Suggestions: NA Questions For Authors: See Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 2
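For readers who want to try the recipe summarized in this review, the five steps can be condensed into a short sketch. This is a hypothetical illustration built only from the formulas transcribed above (the rank-based CDF surrogate and the weighted $p$-norm of step 3), not the authors' reference implementation; ties between models are broken arbitrarily.

```python
import numpy as np

def copa_select(Y, weights, p=2.0):
    """Rank objectives, aggregate with a weighted p-norm, pick the argmin.

    Y       : (N, K) array, row i = the K objective values of model i
              (all objectives oriented so that lower is better).
    weights : (K,) importance weights, assumed to sum to 1.
    p       : aggregation exponent; np.inf gives a Tchebycheff-style max.
    """
    N, _ = Y.shape
    # Step 2: approximate each objective's CDF by relative ranks in (0, 1].
    U = (np.argsort(np.argsort(Y, axis=0), axis=0) + 1) / N
    # Step 3: weighted p-norm criterion C(u) per model.
    if np.isinf(p):
        C = np.max(weights * U, axis=1)
    else:
        C = np.sum(np.abs(weights * U) ** p, axis=1) ** (1.0 / p)
    # Steps 4-5: select the model minimising the criterion.
    return int(np.argmin(C)), C

# Toy use: 5 models, 2 incomparable objectives (error rate, kg of CO2).
Y = np.array([[0.10, 900.0],
              [0.30,   5.0],
              [0.12,  40.0],
              [0.50,   1.0],
              [0.11, 600.0]])
best, C = copa_select(Y, weights=np.array([0.5, 0.5]), p=2.0)
```

On this toy data, `copa_select` picks the balanced model (error 0.12, 40 kg of CO2) rather than either extreme, which is the behavior the review's step 5 describes.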
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. There seems to be a misunderstanding about our work, which we believe we address below. > As long as we have a perfect Pareto front, practitioners can immediately find their own optimal choice We politely disagree. As stated in the intro: ***Pareto fronts in high dimensional spaces are extremely difficult to visualize and navigate***. In fact, for our use-case in Fig. 1 we had to summarize 6 performance scores to their average to depict the 487 Pareto-optimal models (out of 2148). Also, as we discuss in depth (e.g. in lines 50-54, 63-69, or 126-154), ***traversing the front over incomparable objectives is challenging***: e.g., in Fig. 1, “Delta” maps half the preferences, $\alpha \in (0.5, 1)$, to a tiny region of the front (note the log-scale), where LLM performance is very low. The other baselines behave in an inversely similar way, making it hard for practitioners to map their preferences to $\alpha$ (or $\omega$). In contrast, COPA maps the hypothetical (as we make clear in lines 96-100) practitioner’s preference of _equal_ performance-cost compromise to $\alpha=1/2$. Of course, practitioners may require better performing models, which they can easily find by decreasing $\alpha$ as shown in Fig 1-COPA, where there are another 486 Pareto-optimal models with different performance-cost trade-offs. > It is not proper to assume that there could be a weight between different objectives We stress that ***weight-based preferences are still the de-facto approach in many MOO works*** [1-3]. While it is true that weights may not always be easy to interpret (see section 3.1.3 of [1]), we also remark that COPA overcomes this by making all objectives comparable first, for which we provide an interpretation of the weights in lines 237-251. Paraphrasing [1]: “***Only normalizing the objective functions*** can one control the method to produce solutions of a desirable nature [...]
Otherwise the role of the weighting coefficients may be greatly misleading” (as is the case with the baselines in Fig. 1). We acknowledge that there are other ways of expressing preferences (e.g., interactive methods, to which [1] devotes 80 pages), and we would love to explore them in future work. [1] Nonlinear multiobjective optimization (1999) [2] Smooth Tchebycheff scalarization for multi-objective optimization (2024) [3] Revisiting scalarization in multi-task learning: A theoretical perspective (2023) > Users might like to maximize A+2B when C<0.5 whereas maximize A+B+C when C>=0.5 ... We agree that users can often have complex preferences, and believe that COPA can readily handle many of them, including your example, where one can use COPA with the piece-wise criterion function described by the reviewer. Fig. 5 aims to illustrate how one can combine COPA with user constraints over the original objectives. Inspired by the reviewer, we have slightly adapted Fig. 1 to accommodate such a case, *see [here](https://shorturl.at/8XSpI)*. While we agree with the reviewer that users can have complex preferences that COPA may not be able to handle, we believe that this is interesting but challenging future work that does not diminish the contributions of COPA. > Thus, we cannot assume the criterion function C must be differentiable and easy-to-be-optimized There might be a misunderstanding: We do **NOT** require differentiability or easy optimization for the criterion function, as we only need to evaluate all the objectives for each model in Eq. 11. COPA is a multi-objective evaluation method, as stated in the intro, and we assume a given population of (already trained) models (lines 104-107). > With very small N, the accuracy of the estimated CDFs may degrade... Our theoretical results on its variance in Prop. 3.1, the ablation study in App. A.1.1, and our experiments show that COPA is well behaved already for $N=15$ (Case 3).
We will clarify that, as for _any_ statistical estimator, COPA may suffer with extremely low $N$ values. > Rankings discard precise information about the relative distances between objective values We provide COPA as a complement (not a replacement) of the original objectives, as there is no need to discard them (i.e., the marginals). In fact, ***in figures 1-3, 5 and table 2, we plot the Pareto-front exploration in the original objective space*** to enable decision makers to perform intra-objective comparisons. We will stress this aspect in the camera-ready. > I do not think this trick is a sufficient contribution for a full paper. We believe that, as highlighted by reviewer UMh1 in their review, ***the simplicity of our approach is a strength***, and it can be applied to many problems such as LLM selection, domain generalization, and AutoML benchmarking (see Section 5). We hope to have clarified any concerns from the reviewer and, in this case, that they could reconsider their score. We are happy to answer more questions in the next round of the rebuttal. --- Rebuttal Comment 1.1: Comment: Hi, Thanks for the rebuttal. Your answer still does not convince me. For Q1, "Pareto fronts in high dimensional spaces are extremely difficult to visualize and navigate" is your point. I definitely understand it is hard to visualize, but I don't think it is hard to navigate. Since you agree that COPA cannot handle complex preferences, let's take a simple weighting preference $w$ as an example. ## If your Pareto frontier is a set of points (which is most often the case in high-dimensional Pareto frontier estimation) You can just traverse through the Pareto set and calculate $u \cdot x^{i}$. Sort the results and take the maximum. You can also vectorize it to accelerate. ## Pareto frontier is known in analytical form or defined by constraints (continuous frontier) Maximize $u \cdot x$ s.t.
$g(x) = 0$ or $g(x) \leq 0$; or, if the Pareto frontier itself has a parametric representation $x(t)$, the constraint is $x = x(t)$. Solve the optimization problem. (If the problem is non-convex, the result will be sub-optimal.) ## Pareto frontier is known as a generative network ### Gradient Ascent in Latent Space (Most Common & Often Fastest) Maximize $f(z) = u \cdot NN(z)$ s.t. $z \sim N(0, I)$ (or something similar); the problem should be non-convex, so the result will be sub-optimal. Other methods could improve on this: Bayesian Optimization, EAs (e.g., CMA-ES). ### Sampling-Based Methods Latin Hypercube Sampling (LHS), Quasi-Random Sampling (e.g., Sobol, Halton sequences). ## Conclusion Again, as I mentioned: these works should focus more on getting the correct Pareto front rather than helping the user pick the optimal choice. If a user has no carbon credits available for emissions, a completely "CO₂-free" model should be the better choice. Else, maybe the user should only care about performance. --- Reply to Comment 1.1.1: Comment: Dear reviewer, We appreciate the engagement, but we firmly believe that the reviewer is misunderstanding our scope and contribution. We **politely invite them to re-read the paper and our initial rebuttal carefully**. First, we remark again that the setting we consider (as clearly stated in lines 104-107) starts with a given set of models. This **model selection scenario, where _the Pareto front is a set of points_, is ubiquitous in ML and AI as every practitioner knows**.
In fact, all COPA use cases in Section 5 are taken from the ML literature and publicly available repositories, demonstrating its applicability for model selection in a wide range of ML sub-fields ranging from LLMs and fair ML (Section 5.2) to MTL and domain generalization (Section 5.3), as well as for AutoML benchmarking (Section 5.4). Therefore, we disagree with the idea suggested by the reviewer that we should be “getting the correct Pareto front rather than helping the user pick the optimal choice”, as that is an interesting but different problem out of the scope of our paper. We politely invite the reviewer to evaluate our work on the basis of _what it is, what it tries to solve, and what it accomplishes_, and not on what it has never tried to be nor solve. > You can just traverse through the Pareto set and calculate $u \cdot x^{i}$. Sort the results and take the maximum. You can also vectorize it to accelerate. This answer leads us to think that the reviewer has unfortunately only partially and superficially read the paper and our rebuttal. ***While one could enumerate all datapoints by hand, the major issue is comparing the many objectives in a rigorous and systematic way***. To see why, consider that in high-dimensional spaces, when objectives are not comparable, ***adopting the naive traversal and comparison that the reviewer is suggesting will yield exactly a solution that does not map to the preferences***. In fact, simply summing incomparable metrics such as CO2 consumption and performance yields the 'Naive’ approaches depicted in Figure 1, top left (or Figure 2), where half of the preference values are mapped to a small region of the Pareto front (see our previous rebuttal). > Since you agree that COPA cannot handle complex preferences We invite the reviewer to re-read our rebuttal once again. At no point did we agree that “COPA cannot handle complex preferences”.
In order to find a middle ground, and as an act of courtesy from our side, we agreed that “users can have complex preferences that COPA may not be able to handle” since COPA, just like _any other method_, cannot perfectly solve every single conceivable query. But at the same time, ***we precisely showed that COPA can solve the constrained example that the reviewer proposed***, showing its flexibility. > If a user has no carbon credits available for emissions, a completely "CO₂-free" model should be the better choice. Else, maybe the user should only care about performance. We totally agree, and highlight that ***COPA allows users to express preferences where one or more dimensions receive 0 weight***. We do not understand, however, why the reviewer’s argument should imply that “allowing users to express their preference” is a useless thing. It is very likely that a practitioner might want the CO2 consumption or (all 6) performance objective(s) to receive non-zero weight, and COPA would allow them to retrieve the optimal solution in a rigorous and systematic way: an algorithm that can be effortlessly applied to other scenarios (AutoML, fairness, etc.) without the need to manually compare dimensions, avoiding the pitfalls of comparing incomparable objectives.
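The failure mode of naive aggregation debated in the exchange above is easy to reproduce numerically. The numbers below are made up for illustration (they are not from the paper): with raw units, the large-scale CO2 objective dominates an equal-weight sum, while rank normalisation restores a balanced choice.

```python
import numpy as np

# Hypothetical models: objective 1 is an error rate in [0, 1],
# objective 2 is CO2 consumption in kg (wildly different scales).
err = np.array([0.10, 0.30, 0.12, 0.50, 0.45])
co2 = np.array([900.0, 5.0, 4.0, 1.0, 600.0])

# Naive equal-weight sum over raw units: the CO2 scale swamps the error
# term, so the "1/2-1/2 preference" effectively ignores model quality.
naive = 0.5 * err + 0.5 * co2
naive_pick = int(np.argmin(naive))      # lowest-CO2 model, despite 50% error

def ranks(y):
    """Relative ranks in (0, 1] as a surrogate for the objective's CDF."""
    return (np.argsort(np.argsort(y)) + 1) / len(y)

# Rank-normalised objectives: both terms now live on the same (0, 1] scale.
ranked = 0.5 * ranks(err) + 0.5 * ranks(co2)
ranked_pick = int(np.argmin(ranked))    # a model decent on both objectives
```

Here the naive sum selects the model with a 50% error rate simply because its CO2 figure is smallest, while the rank-based criterion selects a model that is strong on both axes.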
Extreme Value Policy Optimization for Safe Reinforcement Learning
Accept (poster)
Summary: The paper introduces Extreme Value Policy Optimization (EVO), an algorithm that enhances safety in reinforcement learning by leveraging Extreme Value Theory (EVT) to model and exploit extreme reward and cost samples. EVO features an extreme quantile constraint to capture tail risks and an extreme prioritization mechanism to amplify learning signals from rare but impactful samples. Theoretically, EVO guarantees strict constraint satisfaction at a zero-violation quantile level and exhibits lower constraint violation probability and variance compared to existing methods. Extensive experiments validate EVO's effectiveness in reducing constraint violations while maintaining strong policy performance. Claims And Evidence: The claims made in the submission are generally well-supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the paper's proposed methods and evaluation criteria are well-suited for the problem and application. Theoretical Claims: I did not find any obvious issues in the proofs for theoretical claims in the paper. Experimental Designs Or Analyses: I did not find any obvious issues in the experimental designs or analyses in the paper. Supplementary Material: N/A Relation To Broader Scientific Literature: Traditional CRL methods, such as Constrained Policy Optimization (CPO) (Achiam et al., 2017), focus on optimizing policies to ensure that the expected cost remains below a predefined threshold. However, these methods often fail to account for the variability in the cost distribution, especially in the tail, leading to frequent constraint violations. EVO addresses this limitation by explicitly modeling the tail behavior using EVT. Methods like WCSAC (Yang et al., 2021) use probabilistic constraints and approximate the cost distribution with a Gaussian model to compute Conditional Value-at-Risk (CVaR). However, Gaussian approximations are inadequate for capturing the tail behavior accurately.
EVO improves upon this by using the Generalized Pareto Distribution (GPD) to model the tail, providing a more accurate representation of extreme events. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ### Strengths The introduction of the extreme quantile constraint based on the Generalized Pareto Distribution (GPD) is a unique contribution. This mechanism allows the algorithm to explicitly model and exploit extreme samples, which is crucial for reducing constraint violations. The paper provides strong theoretical guarantees on constraint satisfaction, violation probability, and variance reduction. These guarantees are essential for building trust in RL systems and ensuring their reliability. ### Weaknesses The integration of EVT and the proposed mechanisms adds complexity to the algorithm. This might make it challenging to implement and tune for practitioners, especially those without a strong background in EVT. In contrast, WCSAC with a Gaussian approximation seems much easier to use. The use of EVT relies on certain assumptions about the distribution of extreme events. While the paper demonstrates that these assumptions hold in the tested environments, they might not generalize to all real-world scenarios. The paper assumes that sufficient extreme samples can be collected to fit the GPD accurately. In environments where extreme events are extremely rare, this might be a limiting factor. Other Comments Or Suggestions: N/A Questions For Authors: 1 What are the minimum sample requirements for EVO to effectively fit the GPD and achieve reliable performance? How does the algorithm handle environments where extreme samples are very scarce? 2 Could the authors provide more details on the computational overhead associated with fitting the Generalized Pareto Distribution (GPD) and performing off-policy importance resampling? Specifically, how does the computational cost scale with the size of the dataset and the complexity of the environment? 
Code Of Conduct: Affirmed. Overall Recommendation: 3
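The review's contrast between Gaussian (WCSAC-style) and GPD tail modeling can be checked with a small numerical sketch, not taken from the paper: on a deterministic grid of heavy-tailed GPD cost values, a fitted Gaussian sharply underestimates the 99% quantile that a safety constraint would care about.

```python
import numpy as np
from scipy.stats import genpareto, norm

xi = 0.5                                  # heavy-tailed GPD shape parameter
grid = np.linspace(0.001, 0.999, 999)     # deterministic probability grid
costs = genpareto.ppf(grid, c=xi)         # heavy-tailed "cost" values

# Gaussian (WCSAC-style) approximation of the cost distribution.
mu, sigma = norm.fit(costs)
gauss_q99 = norm.ppf(0.99, loc=mu, scale=sigma)

# True 99% quantile of the underlying GPD: ((0.01)**-0.5 - 1)/0.5 = 18.
true_q99 = genpareto.ppf(0.99, c=xi)
```

The Gaussian fit places far too little mass in the tail, so a constraint tuned against `gauss_q99` would be violated much more often than intended.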
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's positive and insightful comments. The following are the detailed responses to the points raised by Reviewer o4Co. >The integration of EVT and the proposed mechanisms adds complexity to the algorithm. This might make it challenging to implement and tune for practitioners, especially those without a strong background in EVT. **Response:** We appreciate the reviewer’s insightful comment. To facilitate rapid implementation and support ease of use, we have open-sourced our code along with parameter settings. Moreover, EVO introduces almost no additional hyperparameters, greatly simplifying the tuning process. We thank the reviewer for their insightful suggestions, and we plan to further refine and package EVO into a user-friendly library, enabling practitioners to apply our method via a simple function call without requiring deep knowledge of EVT. >What are the minimum sample requirements for EVO to effectively fit the GPD and achieve reliable performance? How does the algorithm handle environments where extreme samples are very scarce? **Response:** We appreciate the reviewer’s valuable comments. We conducted experiments with varying sample sizes (10, 20, 50, and 100) to evaluate the minimum number of samples required for EVO. As shown in Figure 1 (<https://anonymous.4open.science/r/11409-C857/experiments4.pdf>), larger sample sizes generally lead to improved constraint satisfaction. Specifically, in SafetyPointCircle1-v0, EVO maintains strong constraint satisfaction and performance even with as few as 10 samples. In SafetyPointGoal1-v0, constraint satisfaction is consistently achieved once the sample size exceeds 20. Overall, a minimum of 20 samples is sufficient to achieve reliable performance with EVO.
Additionally, in our main experiments, with the same sample size of 20, EVO demonstrates superior constraint satisfaction and policy performance compared to other baselines, indicating its effectiveness even with limited samples. To address the challenge of very scarce extreme-value samples, EVO augments extreme samples using off-policy samples and applies importance resampling to correct the distribution shift. Furthermore, in special cases where extreme samples are nearly absent or difficult to collect, offline extreme samples can be provided in advance to ensure GPD fitting remains feasible. >Could the authors provide more details on the computational overhead associated with fitting the Generalized Pareto Distribution (GPD) and performing off-policy importance resampling? Specifically, how does the computational cost scale with the size of the dataset and the complexity of the environment? **Response:** We appreciate the reviewer’s valuable comments. To evaluate the computational overhead of EVO, we measured total training time for both EVO and CPO across multiple environments, along with the time spent on GPD fitting. As shown in Table 1 (<https://anonymous.4open.science/r/11409-C857/experiments4.pdf>), EVO adds only limited training time compared to CPO, and GPD fitting is highly efficient, taking only a few seconds in total. This indicates that EVO introduces minimal computational overhead. We also assessed the impact of dataset size by training EVO with varying sample sizes. The results show that increasing the sample size has a negligible effect on overall training time. To evaluate the scalability of EVO with environment complexity, we compared training times across environments of varying difficulty. While more complex environments naturally require more time, EVO’s additional overhead remains comparable to that of CPO.
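The off-policy correction mentioned in the rebuttal above can be pictured with a generic self-normalised importance-resampling sketch. The Gaussian log-densities below are made-up stand-ins chosen only for illustration; the paper's Equation (18) defines the actual correction EVO uses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: costs gathered under a behaviour policy, plus per-sample
# log-densities under the behaviour and current policies (both hypothetical
# Gaussians here, centred at 3.0 and 3.5 respectively).
costs = rng.standard_normal(1000) + 3.0
logp_behaviour = -0.5 * (costs - 3.0) ** 2
logp_current = -0.5 * (costs - 3.5) ** 2

# Self-normalised importance weights correcting the distribution shift.
w = np.exp(logp_current - logp_behaviour)
w = w / w.sum()

# Resample (with replacement) according to the corrected weights, so the
# retained samples approximately follow the current policy's distribution.
idx = rng.choice(len(costs), size=len(costs), replace=True, p=w)
corrected = costs[idx]
```

Because the hypothetical current policy puts more mass on higher costs, the resampled set shifts upward relative to the raw behaviour-policy samples, which is exactly the kind of distribution-shift correction the rebuttal describes.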
Summary: The paper proposes Extreme Value Policy Optimization (EVO), a novel algorithm for safe reinforcement learning (RL) that addresses rare but high-impact extreme events in constrained RL (CRL). Traditional CRL methods optimize expected cumulative costs (e.g., $J_C(\pi) \leq d$), which overlook tail risks (e.g., "black swan" events). EVO integrates Extreme Value Theory (EVT) to model tail distributions of costs/rewards using Generalized Pareto Distributions (GPDs). Claims And Evidence: Supported by experiments (e.g., Figures 3-4) and Theorems 4.1-4.3. However, comparisons to non-EVT quantile methods (e.g., VaR) are missing in the ablation studies. Theorem 4.3 shows $\Omega < \Omega_2$ (EVO vs. quantile regression), but bias in GPD parameter estimation (e.g., $\xi$, $\sigma$) is not addressed. Methods And Evaluation Criteria: EVT-based tail modeling is novel but sensitive to threshold selection for $q_\mu$. The prioritization mechanism ($p = \omega_r + \omega_c$) is intuitive but assumes accurate GPD fits. Tasks like SafetyCarCircle1 are standard but lack dynamic/adversarial scenarios to test robustness. Theoretical Claims: - **Theorem 4.3**: Variance reduction is valid but ignores bias in $\xi$ estimation, which affects $q^{H}_{\frac{\nu}{1-\mu}}$. Experimental Designs Or Analyses: Figure 6 shows variance reduction with resampling but lacks metrics like convergence speed. Supplementary Material: - **Proofs**: Theorem B.2 assumes $q_\mu = J_C(\pi)$, conflating quantiles and expectations. Justification is needed. - **Hyperparameters**: Table 2 lists parameters but omits a sensitivity analysis for $\nu$ adaptation. Relation To Broader Scientific Literature: EVO advances CRL by integrating EVT for tail risks, contrasting with expectation-based (CPO) and Gaussian-based (WCSAC) methods. Connections to distributional RL (e.g., Bellemare et al., 2017) are underexplored.
Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths**: First to integrate EVT with CRL for tail risk mitigation. Theorems 4.1-4.3 and ablation studies validate contributions. Zero violations in experiments are critical for real-world safety. **Weaknesses**: EVT assumes i.i.d. extremes, which may fail in non-stationary RL. GPD fitting and prioritization add complexity; training time is not quantified. Other Comments Or Suggestions: No Questions For Authors: 1. How does EVO ensure policies $\pi_{k+1}$ and $\pi_k$ adhere to the quantile-based objective with off-policy data in Theorem 4.1? 2. How is $q_\mu$ initialized/updated? Could biased thresholds harm GPD fits? 3. Why omit PPO-Lagrangian/RCPO? 4. Does EVO require more samples to collect extremes? Code Of Conduct: Affirmed. Overall Recommendation: 3
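For context on the extreme quantile $q^{H}_{\frac{\nu}{1-\mu}}$ discussed in this review, the textbook peaks-over-threshold estimator derived from a fitted GPD is $\hat q_{p} = u + \frac{\sigma}{\xi}\left[\left(\frac{1-p}{\zeta_u}\right)^{-\xi} - 1\right]$, with threshold $u$, fitted shape and scale $(\xi, \sigma)$, and exceedance fraction $\zeta_u$. This is the standard EVT form, not necessarily the paper's exact parameterisation:

```python
import math

def pot_quantile(p, u, xi, sigma, zeta_u):
    """Peaks-over-threshold quantile estimate at level p.

    u      : threshold above which exceedances were collected
    xi     : fitted GPD shape parameter
    sigma  : fitted GPD scale parameter
    zeta_u : fraction of samples exceeding u
    """
    if abs(xi) < 1e-12:  # xi -> 0 limit: exponential tail
        return u - sigma * math.log((1.0 - p) / zeta_u)
    return u + (sigma / xi) * (((1.0 - p) / zeta_u) ** (-xi) - 1.0)

# Sanity check: with threshold 0, every sample an exceedance (zeta_u = 1),
# xi = 0.5 and sigma = 1, this reduces to the plain GPD quantile
# ((1 - p)^(-xi) - 1)/xi, which equals 18 at p = 0.99.
q = pot_quantile(0.99, u=0.0, xi=0.5, sigma=1.0, zeta_u=1.0)
```

The reviewer's point about bias in $\xi$ is visible directly in this formula: any estimation error in `xi` enters through the exponent, so extreme quantile estimates are highly sensitive to it.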
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's positive and insightful comments. The following are the detailed responses to the points raised by Reviewer hnao. >Comments regarding the accuracy of GPD fitting and the bias in GPD parameter estimation. **Response:** We appreciate the reviewer’s insightful comments and agree that parameter estimation bias exists. In our work, we use maximum likelihood estimation for the GPD parameters, which inevitably introduces bias. This bias is inherent in statistical distribution fitting and not unique to our method. To assess the impact of bias, we saved samples during training and fitted both GPD and Gaussian distributions. Using the Kolmogorov–Smirnov test to measure fitting accuracy (lower values indicate a better fit), we found that the GPD consistently provides accurate fits across various data distributions despite estimation bias, as shown in Figure 1 (<https://anonymous.4open.science/r/11409-C857/experiments3.pdf>). >Comparisons to non-EVT quantile methods are missing in ablation studies. **Response:** We conducted an ablation study on the EVT-based quantile, as shown by the green curves in Figure 5(a)(b) in the paper. The results show that removing it reduces policy performance, demonstrating the advantage of explicitly modeling extremes in EVO. >Questions regarding the selection and update of $q_\mu$. **Response:** In this paper, the quantile $q_\mu$ represents the safety boundary reflecting the overall constraint distribution. Rather than assuming $q_\mu$ equals the expectation, we explicitly set it to the expected constraint value under the current policy. The corresponding quantile level $\mu$ is then derived from the constraint distribution at each update. >Figure 6 lacks metrics like convergence speed. **Response:** We provide the learning curves corresponding to Figure 6, with off-policy importance resampling ablated.
As shown in Figure 2 (<https://anonymous.4open.science/r/11409-C857/experiments3.pdf>), constraint satisfaction in EVO converges after $1 \times 10^6$ steps. >Omitting sensitivity analysis for $\nu$ adaptation. **Response:** We constructed experiments to evaluate the sensitivity to different $\nu$, as shown in Figure 3 (<https://anonymous.4open.science/r/11409-C857/experiments3.pdf>). The results show that EVO is robust to the initial choice of $\nu$, as it is adaptively updated during training based on current policy performance, as shown in Appendix C.4. >Connections to distributional RL are under-explored. **Response:** We will include a more detailed discussion on distributional RL in related work. While distributional RL models the full return distribution instead of the expectation, it does not address constraints. WCSAC extends distributional RL to constrained RL by introducing a distributional safety critic based on CVaR. >EVT may fail for non-i.i.d. extremes. **Response:** For non-i.i.d. data, methods such as Block Maxima or clustering-based modeling can first be applied to make the extreme samples approximately i.i.d., thereby enabling the effective application of EVT. > Training time is not quantified. **Response:** We conducted experiments across different environments, measuring both total training time and the time spent on GPD fitting. As shown in Table 1 (<https://anonymous.4open.science/r/11409-C857/experiments3.pdf>), EVO adds only limited training time compared to CPO, indicating minimal computational overhead. We also evaluated the impact of sample size and found that the additional time remains minimal as the sample size increases. The GPD fitting is highly efficient, taking only a few seconds in total, with negligible overhead even at larger sample sizes. >How does EVO ensure that policies $\pi_{k+1}$ and $\pi_k$ adhere to the quantile-based objective with off-policy data in Theorem 4.1?
**Response:** For off-policy data, we apply an importance resampling method, as shown in Equation (18) of the paper, to correct the distributional discrepancy between the current policy and the off-policy samples. >Why omit PPO-Lagrangian/RCPO? **Response:** We conducted additional experiments across multiple tasks, comparing EVO with PPO-Lagrangian and RCPO. As shown in Figure 4 and Figure 5 (<https://anonymous.4open.science/r/11409-C857/experiments3.pdf>), PPO-Lagrangian exhibits significant oscillation during training, and both PPO-Lagrangian and RCPO frequently violate constraints. In contrast, EVO consistently maintains constraint satisfaction across all tasks while achieving superior policy performance. In SafetyCarCircle1-v0, although RCPO achieves slightly higher returns, it exhibits substantial constraint violations, indicating that its policy is unsafe. >Does EVO require more samples to collect extremes? **Response:** EVO does not require more samples to collect the extremes. In all experiments it uses the same number of samples as the other baselines.
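The off-policy correction the authors describe relies on importance resampling against the current policy. Since Equation (18) of the paper is not reproduced in the rebuttal, the following is only a generic sampling-importance-resampling sketch, with one-dimensional Gaussian stand-ins for the behaviour and current policies (all distributions and names here are illustrative assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2); stands in for the policy densities.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Off-policy samples drawn under an old behaviour policy pi_old = N(0, 1).
samples = rng.normal(0.0, 1.0, size=100_000)

# Importance weights: density ratio pi_new / pi_old for pi_new = N(1, 1),
# then self-normalized so they sum to one.
weights = gauss_pdf(samples, 1.0, 1.0) / gauss_pdf(samples, 0.0, 1.0)
weights /= weights.sum()

# Self-normalized importance-weighted estimate of the mean under pi_new.
est_mean = np.sum(weights * samples)

# Sampling-importance-resampling: approximate draws from pi_new.
resampled = rng.choice(samples, size=50_000, replace=True, p=weights)
print(est_mean, resampled.mean())  # both close to 1.0
```

Self-normalizing the weights trades a small bias for much lower variance, which is the usual choice when the behaviour policy's normalizing constant is unknown.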
Summary: This paper presents the Extreme Value policy Optimization (EVO) algorithm for safe reinforcement learning. It integrates Extreme Value Theory (EVT) to model and utilize extreme samples. EVO introduces an extreme quantile constraint and an extreme prioritization mechanism. Theoretically, it has a lower constraint violation probability and variance than baselines. Experiments in Safety Gymnasium and MuJoCo show that EVO reduces constraint violations while maintaining competitive policy performance. Claims And Evidence: Most of the claims in the paper are supported by clear and convincing evidence. Methods And Evaluation Criteria: The methods and evaluation criteria proposed in the paper are consistent with the research problems. Theoretical Claims: I checked the theoretical claims in the main text. I have some points of confusion about the theoretical parts: - In lines 210-211, the conditional probability does not follow the GPD. According to Theorem 3.1, it only follows the GPD as $q_\mu \to \infty$. The following argument should therefore only hold approximately. - Why does (8) hold? It seems that $Z \le z \iff C - q_{\mu} \le q_{\mu + \nu} - q_{\mu} \iff C \le q_{\mu + \nu}$. - In Theorem 4.1, I cannot understand this sentence: “$\pi_{k+1}, \pi_{k}$ are related by quantile-based constraint objective 11.” - The metrics investigated in Section 4.4 should be defined formally and discussed more. It is currently unclear why these theorems demonstrate the advantage of EVO. Experimental Designs Or Analyses: The paper is reasonable and effective in experimental design and analysis. Supplementary Material: I did not check the supplementary parts. Relation To Broader Scientific Literature: n/a Essential References Not Discussed: n/a Other Strengths And Weaknesses: See other parts. Other Comments Or Suggestions: - In the right column of line 399, there is a typo in “Figure 5a and 5a”. Questions For Authors: See other parts. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's positive and insightful comments. The following are the detailed responses to the points raised by Reviewer bFMW. >In lines 210-211, the conditional probability does not follow the GPD. According to Theorem 3.1, it only follows the GPD as $q_\mu \to \infty$. The following argument should therefore only hold approximately. **Response:** We appreciate the reviewer’s insightful comment and agree with the reviewer’s perspective. According to extreme value theory, extreme value samples asymptotically follow the GPD. Therefore, we use the asymptotic equality symbol to reflect this relationship, as shown in Equations (8) and (9) in our paper. Empirically, our experiments show that the GPD provides a good fit for modeling extreme value samples. To validate this, we collected training data from multiple environments and fitted both GPD and Gaussian distributions. Furthermore, we employed the Kolmogorov-Smirnov test to quantify the fitting accuracy, where lower scores indicate a better fit. As shown in Figure 1 (<https://anonymous.4open.science/r/11409-C857/experiments2.pdf>), the GPD provides accurate fits across various data distributions, and it consistently outperforms the Gaussian distribution in capturing tail behavior. We also acknowledge that in special cases, such as when the difference between extreme and normal values is small, the GPD may not accurately capture the extremes. To address this issue, nonlinear transformations or similar methods can be applied to amplify the difference between extreme and normal values, and the data can then be fitted with the GPD for improved precision. >Why does (8) hold? It seems that $Z \le z \iff C - q_{\mu} \le q_{\mu + \nu} - q_{\mu} \iff C \le q_{\mu + \nu}$. **Response:** We appreciate the reviewer’s valuable comment. In this paper, $z$ denotes the excess value beyond $q_\mu$, which satisfies the condition $z = q_{\mu + \nu} - q_\mu > 0$, as shown in Eq (12) in the paper.
So $P(Z \le z) = P(C - q_\mu \le z \mid C > q_\mu)$, and $P(Z \le z) \neq P(C \le q_{\mu + \nu})$. In this paper, we use the GPD $P(Z)$ to separately model the portion $Z$ of the overall distribution $C$ that exceeds $q_\mu$. According to EVT, $P(C - q_\mu \le z|C>q_\mu)$ asymptotically follows the GPD, thus $P(C - q_\mu \le z|C>q_\mu) \backsimeq P(Z \le z)$. According to Eq. (7) in the paper: \begin{equation} \begin{aligned} F_C(q_{\mu+\nu}) = P(C \le q_\mu + z) = P(C \le q_\mu) + P(C >q_\mu) P(C - q_\mu \le z|C>q_\mu) \end{aligned} \end{equation} Then we can get Eq (8): \begin{equation} \begin{aligned} F_C(q_{\mu+\nu}) = P(C \le q_\mu + z) = \mu + \nu \backsimeq \mu + (1-\mu)P(Z\le z) \end{aligned} \end{equation} We appreciate the reviewer’s valuable comment, which enhances the clarity of our paper. >In Theorem 4.1, I cannot understand this sentence: "$\pi_{k+1}, \pi_{k}$ are related by quantile-based constraint objective 11." **Response:** We appreciate the reviewer’s valuable comment. This means that policy $\pi_{k+1}$ is obtained by optimizing policy $\pi_k$ according to objective function (11). Here, we follow the notation and description adopted in constrained RL works such as CPO and PCPO. >The metrics investigated in Section 4.4 should be defined formally and discussed more. It is currently unclear why these theorems demonstrate the advantage of EVO. **Response:** We appreciate the reviewer’s valuable comments. Theorem 4.1 is about the constraint expectation, demonstrating that the updated policy in EVO has overall lower constraint values compared to expectation-based CRL methods, which is empirically validated in multiple environments in Figures 3 and 4 of the paper. Theorem 4.2 focuses on the constraint violation probability, showing that EVO has a lower constraint violation probability during training compared to expectation-based CRL methods, as validated in Figures 3 and 4 in the paper across multiple environments.
Theorem 4.3 concerns the variance of extreme value estimation, proving that EVO provides more stable extreme value estimates than quantile regression methods, which is verified by the results in Figure 6 of the paper. We hope these discussions help clarify the theorems in Section 4.4. > In the right column of line 399, there is a typo in "Figure 5a and 5a". **Response:** We greatly appreciate the reviewer’s careful review and will correct this typo in the revised manuscript.
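The fit-then-test procedure described in these rebuttals (maximum-likelihood GPD fitting of threshold excesses, compared against a Gaussian fit via the Kolmogorov-Smirnov statistic) can be sketched with SciPy. The cost distribution, threshold level, and sample size below are illustrative assumptions on synthetic data, not the authors' training samples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for constraint-cost samples: heavy-tailed positive costs.
costs = stats.genpareto.rvs(c=0.3, scale=1.0, size=5000, random_state=rng)

# Peaks-over-threshold: keep the excesses above the empirical 90% quantile.
q_mu = np.quantile(costs, 0.90)
excesses = costs[costs > q_mu] - q_mu

# Maximum-likelihood GPD fit of the excesses (location pinned at 0),
# versus a Gaussian fit of the same data.
c_hat, _, scale_hat = stats.genpareto.fit(excesses, floc=0)
mu_hat, sigma_hat = stats.norm.fit(excesses)

# Kolmogorov-Smirnov statistic for each fitted model; lower means better fit.
ks_gpd = stats.kstest(excesses, "genpareto", args=(c_hat, 0, scale_hat)).statistic
ks_norm = stats.kstest(excesses, "norm", args=(mu_hat, sigma_hat)).statistic
print(ks_gpd, ks_norm)  # the GPD should fit the heavy tail markedly better
```

Note that, as the bFMW review points out, KS p-values computed after fitting parameters on the same data are optimistic; comparing the raw KS statistics of the two candidate fits, as the rebuttal does, sidesteps that issue.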
Summary: The authors propose a novel approach, Extreme Value Policy Optimization (EVO), to handle rare but high-impact extreme value events by using the Extreme Value Theory (EVT). The EVO introduces an extreme quantile optimization objective and an extreme prioritization mechanism. Extensive experiments are conducted to demonstrate the effectiveness of EVO in terms of the reduction in constraint violations while maintaining the performance. Claims And Evidence: The claims made in the submission are generally supported by clear and convincing evidence. Methods And Evaluation Criteria: The methods and evaluation criteria in the paper are aligned with the target problem. Theoretical Claims: The proofs for theoretical claims are correct. Experimental Designs Or Analyses: The experimental designs are sound and valid. Supplementary Material: I didn't check the supplementary material. Relation To Broader Scientific Literature: The paper addresses the limitation of expectation-based constrained reinforcement learning (Achiam et al 2017). EVO makes the improvement by explicitly modeling extreme events by EVT. Essential References Not Discussed: No Other Strengths And Weaknesses: Strength: The paper uses EVT to constrained RL to capture rare but high-impact extreme value events that previous methods overlook. Weakness: The paper depends on the assumption that the extreme value events follow Generalized Pareto Distributions, which may not be true. Other Comments Or Suggestions: No. Questions For Authors: Could the authors provide more details about the sample size for EVO to fit the GPD? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's positive and insightful comments. The following are the detailed responses to the points raised by Reviewer csFL. >The paper depends on the assumption that the extreme value events follow Generalized Pareto Distributions, which may not be true. **Response:** We appreciate the reviewer’s insightful comments. According to extreme value theory, as extreme values increase, these samples asymptotically follow the GPD, which has proven effective for modeling extreme events in prior studies [1][2]. To evaluate the fitting accuracy of the GPD in our experiments, we collected training data across multiple environments and fitted both GPD and Gaussian distributions. Furthermore, we employed the Kolmogorov-Smirnov (KS) test to quantify the accuracy of the GPD and Gaussian fits, where lower values indicate more accurate fits. As shown in Figure 1 (<https://anonymous.4open.science/r/11409-C857/experiments1.pdf>), the GPD provides accurate fits across various data distributions; it fits extreme samples more accurately than the Gaussian distribution and better captures the tail behavior of the distribution. We also acknowledge that in special cases, such as when the distinction between extreme and normal values is small, the GPD may not provide a satisfactory fit. To address this issue, we can first apply methods like nonlinear transformations to amplify the difference between extreme and normal values, and then fit the data with the GPD for improved accuracy. We thank the reviewer for the constructive feedback and view this as a promising direction for extending EVO to more practical applications in future research. >Could the authors provide more details about the sample size for EVO to fit the GPD? **Response:** We appreciate the reviewer’s valuable comment regarding the sample size in EVO.
We conducted additional experiments varying the sample size used for GPD fitting and evaluated the corresponding policy performance. As shown in Figure 2 (<https://anonymous.4open.science/r/11409-C857/experiments1.pdf>), larger sample sizes generally lead to improved constraint satisfaction. Notably, in SafetyPointCircle1-v0, EVO maintains strong constraint satisfaction and performance even with as few as 10 samples. In SafetyPointGoal1-v0, constraint satisfaction is consistently achieved once the sample size exceeds 20. In our main experiments, with the same sample size of 20, EVO demonstrates superior constraint satisfaction and policy performance compared to other baselines, indicating that it remains effective even with relatively small sample sets. [1] NS, K. S., Wang, Y., Schram, M., Drgona, J., Halappanavar, M., Liu, F., and Li, P. Extreme risk mitigation in reinforcement learning using extreme value theory. arXiv preprint arXiv:2308.13011, 2023. [2] Siffer, A., Fouque, P.-A., Termier, A., and Largouet, C. Anomaly detection in streams with extreme value theory. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1067–1075, 2017.
Great Models Think Alike and this Undermines AI Oversight
Accept (spotlight poster)
Summary: The authors design a metric for measuring model similarity, using a combination of model performance, the type of mistakes, and probabilistic decisions made by the model. The authors demonstrate that for various pairs of models, there is a high correlation between the measured similarity of the pair and the evaluation score when using one model to judge another, as well as a bias for models to judge a similar model better. They then explore the setting of weak-to-strong supervised training with a teacher model. Finally, the authors observe that as models become more capable, mistakes simultaneously become more similar across a number of different model families. Claims And Evidence: ### Claim Models favour similar models when used as a judge. ### Evidence I believe that there is sufficient evidence to provide preliminary support for this claim; the LLM-as-a-Judge results consistently demonstrate that models tend to favour similar models, supporting this claim. ----- ### Claim The model similarity metric, $\kappa_p$, is an accurate predictor of model similarity. ### Evidence The authors use the model judgement score to show that the model similarity metric correlates well with judgement scores. Methods And Evaluation Criteria: The methods for evaluation are appropriate and the benchmarks/datasets are reasonably chosen based on the construction of the metric. Theoretical Claims: N/A. Appendix A.6 computes some bounds, but these are generally elementary mathematics, so I do not see any issues with their correctness. Experimental Designs Or Analyses: The design is appropriate. I believe that evaluating judgement scores and comparing them against the similarity metric is a sensible experiment. In Section 4, the experimental setup is appropriate (first evaluate for similarity, then train in a student-teacher setup) and there are no claims that appear problematic.
In Section 5, the authors again compare similarity on two tasks (MMLU and BBH) against accuracy, demonstrating a strong correlation between accuracy and similarity. Supplementary Material: I have seen all the supplementary material. Relation To Broader Scientific Literature: The results are complementary to work in evaluating the trustworthiness or robustness of LLMs, which has also shown a degree of bias in LLM-as-a-judge settings as well as overlap in LLM responses. The authors provide a new way to explicitly measure similarity, providing an interesting alternative interpretation of this setting. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: ### Strengths - The work is clearly motivated and the empirical experiments are well designed. - There are clear existing use-cases for the metric the authors propose. --- ### Weaknesses - The tasks that are used for measuring similarity appear to be based only on accuracy. However, various tasks do not directly use hard ground-truth labels, such as those for summarization or instruction following. I believe diversifying the set of evaluated tasks could better demonstrate the benefits of the method, or reveal potential areas that need to be taken into consideration (e.g., measuring similarity within reasoning chains). - In Table 2, the correlation scores can vary quite a bit based on the model size. This may be the direct result of model capability, as the authors explore in Section 5, but I believe there may be a need to either account for model size within the metric computation or potentially better understand the relationship between the metric and model size. - While I appreciate the comment that having similar models undermines oversight, I feel that claims about model similarity have been extant for an extended period of time.
I do not particularly believe that this work offers a significantly novel perspective on this front, especially as many of the claims remain partly speculative, which makes it difficult to draw broader conclusions. Furthermore, while the metric the authors introduce can be useful, it doesn't provide a direct solution towards mitigating the problem of AI oversight. Hence, in my opinion, it may be better to reframe this aspect of the work. Other Comments Or Suggestions: See above. Questions For Authors: See strengths and weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We are glad you found our **metric useful, experiments well-designed, evidence sufficient, and motivation clear**. Given these positive points, we were a bit surprised by the recommendation. We hope the new analyses and clarifications below address the concerns. **(W1)** > The tasks that are used for measuring similarity appear to be based only on accuracy. However, various tasks do not directly use hard ground-truth labels Our metric uses ground truth labels to prevent the inflation of similarity scores, as we illustrate with [new plots](https://imgur.com/a/XL7OBwH). **In applications where ground-truth labels are not available, one can still use the observed agreement $c_{obs}^p$ as computed by our metric**. Evaluating open-ended tasks is an exciting direction of future work [L392-404]. **(W1)** > I believe a diversification in the number of tasks that are being evaluated for can better demonstrate the benefits of the method We appreciate the reviewer's suggestion regarding task diversification. To further confirm the broad relevance of our method, **we extend our analysis to AlpacaEval, whose open-ended chat responses do not have objective ground truth**. We find that MMLU-Pro benchmark similarity correlates strongly ($r=0.75$ to $r=0.87$) with judge preferences on AlpacaEval in [new plots](https://imgur.com/a/CIKQOmf). This reinforces the conclusions in Sec 3, so we will include it in App. B of the revision. **We believe our experiments across MMLU-Pro (14 categories), BBH (23 categories, Sec 5), and 15 NLP tasks (Sec 4) are comprehensive, as noted by Reviewers RC4b and qUdH**. **(W2)** > In Table 2, the correlation scores can vary quite a bit based on the model size… better understand the relationship between the metric and model size. To check whether model size is a possible confounder, **we extend our multiple regression results by including model size as a predictor -- [new plots](https://imgur.com/a/y06LkCT)**.
We find that similarity remains a significant predictor of judgement score when controlling for both model size and accuracy. Furthermore, the effect size of model size when controlling for similarity and accuracy is close to 0, confirming that our focus on model capability rather than model characteristics is a sound approach, consistent with prior work [1]. **(W3)** > I feel that claims about model similarity have been extant for an extended period of time. I do not particularly believe that this work offers a significantly novel perspective on this front **Response:** While there might have been a common feeling in the community about models being similar, **our work is the first to quantify it comprehensively (as highlighted by reviewer qUdH) at scale**, showing a concerning increase in similarity with improving capability by analyzing 130 LLMs from various developers. Our analysis of LLM-as-a-judge extends existing self-preference results, showing that mitigations like using separate judges [4] are not enough. In Sec 6, we discussed previous work that explored model differences, and we will also cite theoretical results like [3], which study the effects of algorithmic monoculture on fairness, in our revision. **(W3)** > many of the claims remain partly speculative **Response:** We are happy to clarify our claims and provide additional support in our revision. Could you please indicate which specific claims you found speculative? **(W3)** > while the metric the authors introduce can be useful, it doesn't provide a direct solution towards mitigating the problem of AI oversight **Response:** We acknowledge your observation: our metric does not directly mitigate the impact of model similarity on AI oversight. Our paper's **primary goal was to establish a robust framework for quantifying this effect, which is a crucial first step** in addressing the issue.
Specifically, our findings (1) shed light on the limitations of LLM-as-a-judge systems, popular in many leaderboards [L195-204], (2) offer insights into open problems in weak-to-strong generalization, as highlighted by reviewer qUdH, and (3) reveal an inverse scaling phenomenon [2] where increasing capabilities exacerbates these issues. By quantifying the problem, we set the stage for future research on potential mitigation strategies, as discussed in Sec 7. We thank you for the valuable feedback, which has made our paper stronger! Please let us know if there are any further questions or concerns we can resolve to increase your support for our work. **References** [1] Huh, Minyoung, et al. "Position: The platonic representation hypothesis." ICML 2024. [2] McKenzie, Ian R., et al. "Inverse scaling: When bigger isn't better." TMLR (2023). [3] Bommasani, Rishi, et al. "Picking on the same person: Does algorithmic monoculture lead to outcome homogenization?." *NeurIPS* (2022) [4] Decentralized Arena via Collective LLM Intelligence, [https://de-arena.maitrix.org](https://de-arena.maitrix.org/) --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ time and effort spent in providing a response. After reading the response as well as considering the other reviews and the authors’ responses, I am willing to slightly raise my score.
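The multiple-regression control described in the rebuttal above (judgement score regressed on similarity, accuracy, and model size) can be sketched on synthetic data. All variable names and generating coefficients below are illustrative assumptions, not the authors' measured values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data standing in for the paper's measurements:
# per-model similarity to the judge, task accuracy, and log model size.
n = 200
similarity = rng.uniform(0.0, 1.0, n)
accuracy = rng.uniform(0.3, 0.9, n)
log_size = rng.uniform(0.0, 2.0, n)  # e.g. log10(billions of parameters)

# Ground truth for this demo: the judge score depends on similarity and
# accuracy but NOT on model size, mirroring the rebuttal's finding that
# the size effect is near zero once similarity and accuracy are included.
judge_score = (2.0 * similarity + 1.0 * accuracy
               + 0.0 * log_size + rng.normal(0.0, 0.05, n))

# Ordinary least squares with all three predictors plus an intercept.
X = np.column_stack([np.ones(n), similarity, accuracy, log_size])
coef, *_ = np.linalg.lstsq(X, judge_score, rcond=None)
print(coef)  # approximately [0, 2, 1, 0]
```

With all predictors in one regression, a near-zero fitted coefficient on `log_size` is exactly the "effect size close to 0" pattern the rebuttal reports.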
Summary: The paper proposes a similarity metric between LLMs based on their logits, which measures the similarity of the mistakes two models make on a task. The authors use this similarity metric to perform a variety of analyses. In particular, they find that similarity can predict scores from LLM judges and weak-to-strong generalization results, but also that model capability predicts similarity, i.e., more capable models tend to be more similar to each other. Claims And Evidence: The authors make nuanced claims and I think all claims are well supported by evidence. Methods And Evaluation Criteria: Yes, all experiments and analyses make sense. The LLM-as-a-judge experiments would benefit from looking at a wider range of settings / datasets. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: I looked at all experiments and results and did not find any major issues with the analysis. Overall I find the experiments comprehensive, convincing and clearly presented. The authors are very clear about the claims they make and how the experiments test these claims. The experimental rigour is significantly greater than for a typical ICML paper. Minor issue: I found the experimental setup for the weak-to-strong generalization experiments difficult to follow. Most relevant details are described in the appendix but I think the clarity in the main paper could be improved. A specific detail I wasn't able to understand from the paper is: "as shown in Table 3, the actual ceiling is significantly higher if complementary knowledge of the weak supervisor is fully leveraged" Do you finetune the model first on ground truth labels and then on the outputs of the weak supervisor? Supplementary Material: I read Appendix C but nothing else.
Relation To Broader Scientific Literature: The paper significantly improves upon previous work measuring model similarity, both in terms of the actual similarity metric proposed (a "method" contribution) and in the breadth and depth of the experimental analysis based on this metric. For example, the weak-to-strong generalization experiments provide insights into some of the key open questions discussed in Burns et al. 2024. Overall, I think this paper is a strong contribution and will be of great interest to the ICML community. Essential References Not Discussed: None Other Strengths And Weaknesses: Some results around the weak-to-strong generalization experiments are difficult to understand from the main paper. Other Comments Or Suggestions: none Questions For Authors: * Could you explain the training of the complementary knowledge ceiling model in the weak-to-strong generalization experiments? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We are grateful for your strong support of our work. We are glad that you found our claims nuanced and well-supported by comprehensive, rigorous experiments. On your question about how we compute the “elicitation $\cup$ complementary” ceiling estimate in Table 3, we should indeed have made this clearer in the main paper itself. We take the union of the samples that either the strong model fine-tuned on ground-truth labels (elicited) or the weak supervisor (complementary) gets right. This is likely an overestimate of what can be realised from weak-to-strong training, but we think that is a lesser harm for a ceiling estimate. The previously proposed ceiling of ground-truth elicitation on the strong model ignores potential gain from complementary knowledge, which can be significant as indicated in Figures 4 and 12. We think such underestimation is worse for a ceiling estimate, as it might explain concurrent work [1] reporting >100% PGR, beating the “ceiling” of ground-truth elicitation on the strong model. Thus, we avoid estimating the ceiling by methods that can introduce any suboptimalities. For example, if we sequentially finetuned on ground-truth labels and then the weak supervisor's predictions or vice-versa, this could lead to catastrophic forgetting of the first part of training instead of fully combining their knowledge as desired. We agree with you that many important details about the weak-to-strong setup are in the Appendix. This is due to the limited space for the initial submission. In the camera ready, we will utilise the extra page available to shift setup details for the weak-to-strong experiments currently in Appendix C.1 and C.2 back to Section 4. We are happy to discuss any further questions about the weak-to-strong and other experiments! [1] Shi, Junhao, et al. "How to Mitigate Overfitting in Weak-to-strong Generalization?." *arXiv* (2025).
Summary: This paper investigates the challenges of AI oversight when using Language Models (LMs) to evaluate and supervise other LMs. It highlights how model similarity can undermine oversight, as similar models tend to make correlated mistakes and exhibit affinity bias, rating outputs from similar models more favorably. To address this, the authors introduce a novel metric, κp, that accurately measures the functional similarity of LMs based on overlapping mistakes while accounting for chance agreement and output probabilities. The study also demonstrates that greater diversity between supervising and student models enhances generalization and learning. The paper emphasizes the risk that, as LMs become more capable, their mistakes become more similar, which may amplify oversight blind spots. It calls for better diversity in oversight models, transparent reporting of model similarities, and more robust evaluation frameworks to ensure reliable AI oversight. Claims And Evidence: • Claim 1: “LLM judges exhibit affinity bias, favoring models similar to themselves.” It is supported with relatively convincing analysis. However, while Figure 2 suggests family-level bias, it lacks a clear statistical comparison between “same family” and “different family” pairs. • Claim 2: “Training gains from weak-to-strong generalization are higher when the supervisor and student models are less similar.” The authors provide analysis to support this claim. However, the paper’s conclusions about complementary knowledge are entirely dependent on the κp metric. Moreover, the selection criteria for “weak” vs. “strong” models are not transparently discussed. • Claim 3: “Model mistakes are becoming more similar with increasing capabilities, which is concerning for AI safety.” The study only shows a correlation between model capability and similarity of mistakes. There is no causal evidence to prove that increasing capabilities cause more similar mistakes.
Additionally, it fails to account for key confounders that could explain the increasing similarity trend. For example, training data overlap: many LLMs use similar datasets, which naturally biases them towards similar mistakes. Methods And Evaluation Criteria: • While the authors provide evaluations in two different usage scenarios for LMs, the benchmark datasets are limited. In Section 3, only the MMLU-Pro dataset is used. • Additionally, there is no evaluation of the qualitative nature of mistakes, such as factual errors or reasoning errors; thus, it lacks insights that connect to or improve real LM applications. Theoretical Claims: • The paper lacks theoretical proofs explaining why κp captures probabilistic differences better than divergence metrics. • The derivation of chance agreement ($c^p_{exp}$) relies on the simplifying assumption that probabilities are uniformly distributed across incorrect options. However, it doesn’t consider tasks where wrong answers aren’t uniformly distributed or where some options are systematically more likely than others. Experimental Designs Or Analyses: • In Section 3, the authors control for accuracy using partial correlations and regression but neglect other possible confounding factors, such as training data overlap. Models from the same family may have been trained on overlapping datasets, leading to similar outputs and thus higher judgment scores. • In Section 5, there is no real-world experiment demonstrating how these similarities lead to oversight failures, so the claim that increasing similarity is a “safety concern” is questionable.
Supplementary Material: Yes, I specifically paid attention to Appendix A. Relation To Broader Scientific Literature: The κp metric extends traditional agreement metrics like: Cohen’s κ (Cohen, 1960): Measures inter-annotator agreement while adjusting for chance; Krippendorff’s α (Krippendorff, 1970): A more flexible metric for measuring agreement with chance correction; Error Consistency Metrics (Burns et al., 2022): Focused on measuring consistency in model mistakes. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: • Novel Contribution: κp Similarity Metric • Utilizes partial correlations and multiple regression to control for accuracy when analyzing judgment bias Weaknesses: • No rigorous comparison with alternative similarity metrics (e.g., KL Divergence, JSD, RSA). • Some confounders are not sufficiently controlled in the experimental designs, including: training data, architectural similarities. • Limited causal evidence in similarity trends • Lack of qualitative error analysis Overall, while the paper shows strong motivation and good effort in experiment design, much critical information and analysis is missing to convince the audience and provide better insights. Other Comments Or Suggestions: • Figures 2 and 3 are really informative, but also overwhelming. It can be hard to comprehend all the details and interpret the analysis based on them. • For 3.2 Q1, it would be better to provide the table where the data is from; a single example makes the analysis difficult to validate. Questions For Authors: As listed in previous parts and weakness section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are glad you liked the motivation and experimental design. We hope our new analyses address your concerns. **(W1)** > Comparison with alternative similarity metrics (e.g., KL Divergence, JSD, RSA) Divergence metrics have desirable information theoretic properties, but **we did not find any way to adjust for the effect of higher accuracy inflating the similarity reported by these metrics** [L88-99 (R), Table 1]. We re-plot Figs. 6-8 with JSD, a symmetric and bounded variant of KL, to demonstrate this -- [new plots](https://imgur.com/a/XL7OBwH), e.g., the second plot studies 90% accurate models with independent errors, where $JSD > 0.75$, but as desired, $\kappa_p \approx 0$. App. A4 discusses design choices for $\kappa_p$ by comparing with many alternate similarity metrics, including why we use probabilistic overlap instead of JSD [L1029-1031]. Here, we will also include that **alternative priors, if known, can be used to compute chance agreement $c^p_{exp}$** on *``tasks where wrong answers aren’t uniformly distributed''*, thanks for this suggestion. Finally, we focus on functional similarity metrics and not model-dependent representational similarity analysis (RSA) [L60-80 (R)]. **(W2, W3)** > Some confounders are not sufficiently controlled …, including: training data, architectural similarities; Limited causal evidence in similarity trends We agree deeper analysis of the causal factors is an important next step [L386-390 (R)]. **We tried our best not to make causal claims in the paper, and will rephrase any we overlooked**. We will acknowledge training data overlap as a potential reason for our observations. Unfortunately, training data mixtures are not known even for open-weight frontier models to control for this. As requested, we now exclude *``same-family pairs''* for Fig. 2 in [new plots](https://imgur.com/a/qKr2yi3). **Judge scores remain highly correlated** with similarity $r \in [0.89, 0.95]$. 
Further, [new table](https://imgur.com/a/y06LkCT) controls for *model size* as requested by reviewer nzM3. **Similarity still has high partial correlation**. For similarity trends with increasing capability, we already excluded same-family pairs in Fig. 5. In Sec 5.2 we mention how switching to **state space models (Mamba) does not reduce similarity**. App D.2 has more details and observations, showing **sample-hardness has a small effect on this trend, while instruction tuning exacerbates it**. We are happy to demonstrate further controls if you have any suggestions. > the benchmark datasets are limited. In section 3, there is only dataset MMLU-Pro In Sec 3, we were constrained to tasks which can be evaluated as both free-response and MCQ [L199-203]. This is why we cannot use benchmarks like BBH in Sec 5 and the 15 NLP tasks in Sec 4. **MMLU-Pro itself is quite diverse, and [new plots](https://imgur.com/a/grG06X1) show consistent results across its 14 domains ranging from law to engineering**. In fact, could this diversity lead to similarities on MMLU-Pro predicting judgements on more open-ended tasks? **[New plot](https://imgur.com/a/CIKQOmf) shows MMLU-Pro similarity has high correlation with LM-judge-based Elos on the chat response dataset AlpacaEval 2.0** [1]. **(Evidence for W2S)** > conclusions about complementary knowledge are entirely dependent on the κp metric We forgot to add a pointer in our main paper to Fig. 13, which shows that the **complementary knowledge conclusion consistently holds with 1 - JSD, though $\kappa_p$ explains more variance ($r=0.77$ vs $0.85$)** and has better normative grounding for analyzing model similarity trends as discussed in Sec 2. We will mention the *``selection criteria for weak vs strong models''* is model size, consistent with Burns et al. (2024) and Scherlis et al. (2024) in the revision. **(W4)** > no evaluation of the qualitative nature of mistakes. 
Qualitative analysis of mistakes is an interesting, related yet complementary direction [L360-369 (R)]. We respectfully disagree that this implies a *``lack of insights to connect to/improve the real LMs applications''*. Our quantitative measurement of similarity provides a **necessary foundation for applications like evaluating qualitative descriptions of model differences [L431-437 (R)], debiasing LLM-as-a-judge based leaderboards [L195-205]**, and many exciting directions for future work discussed in Sec 7. > For 3.2 Q1, provide the table where the data is from. [New tables](https://imgur.com/a/7XSrXt3) contain the data for all 351 model pairs studied. To make this data easier to interpret, we reported it as scatter points in Fig 2. In the final version, we will link our **PyPi package and interactive tool** to aid readers in exploring similarities of chosen model pairs. Thanks! Your detailed feedback has helped us greatly improve the paper. We hope this increases your support for our work. [1] Dubois, Yann, et al. "Length-Controlled AlpacaEval: A Simple Debiasing of Automatic Evaluators." *COLM* (2024)
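As a side illustration of the accuracy-adjustment argument in this rebuttal, the sketch below computes the discrete chance-corrected agreement (the error-consistency special case that, per the rebuttal, $\kappa_p$ reduces to for binary outcomes). This is our own minimal sketch, not the paper's full probabilistic metric; the function name is hypothetical.

```python
import random

def chance_corrected_agreement(correct_a, correct_b):
    """Discrete chance-corrected agreement over correct/incorrect outcomes.
    Illustrates the accuracy adjustment the rebuttal describes; NOT the
    paper's full probabilistic kappa_p."""
    n = len(correct_a)
    acc_a = sum(correct_a) / n
    acc_b = sum(correct_b) / n
    # Observed agreement: both correct or both wrong on a sample.
    c_obs = sum(a == b for a, b in zip(correct_a, correct_b)) / n
    # Agreement expected by chance for independent models at these accuracies.
    c_exp = acc_a * acc_b + (1 - acc_a) * (1 - acc_b)
    return (c_obs - c_exp) / (1 - c_exp)

# Two 90%-accurate models with independent errors: raw agreement is high
# (~82%), yet the chance-corrected score is near 0, matching the rebuttal's
# point that unadjusted similarity metrics are inflated by accuracy alone.
rng = random.Random(0)
n = 100_000
model_a = [rng.random() < 0.9 for _ in range(n)]
model_b = [rng.random() < 0.9 for _ in range(n)]
score = chance_corrected_agreement(model_a, model_b)
```

A model compared with itself scores exactly 1, while independent errors score near 0, which is the behavior the rebuttal contrasts with JSD.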
Summary: This paper introduces a probabilistic metric for model similarity that adjusts for chance agreement due to accuracy, distinguishes different types of mistakes, and incorporates confidences. Using this metric, the authors reveal three key insights: 1. LM judges demonstrate affinity bias, favoring models similar to themselves. 2. Greater gains in weak-to-strong generalization occur when training strong student models with annotations generated by weak supervisors that are more different from the student. 3. Model errors get more correlated as model capabilities increase, raising concerns about correlated failures in AI oversight. Claims And Evidence: Yes. While I couldn't verify all the materials in the appendix, the main results in the paper seem reasonable. Methods And Evaluation Criteria: Yes, the methods and evaluation criteria proposed in the paper are reasonable. The proposed similarity metric is well-motivated, with the relation and difference compared to the existing metrics clearly explained. Theoretical Claims: The proofs in Appendix A are reasonable. Experimental Designs Or Analyses: The experimental designs and analyses that I reviewed are comprehensive and demonstrative of the claims made in the paper. Supplementary Material: No, I did not look into the supplementary material in detail. Relation To Broader Scientific Literature: This work introduced a probabilistic metric for model similarity that is built upon the existing metrics while addressing their limitations. Some insightful observations on AI oversight are provided based on the proposed metric, which further solidifies the broader impact of this work. Essential References Not Discussed: To my knowledge, the paper discussed the essential references in the field. Other Strengths And Weaknesses: Strengths: - The proposed similarity metric $\kappa_p$ is well-motivated and addresses the limitations of existing metrics. 
- The observation on negative correlation between weak-to-strong generalization and model similarity is quite insightful. - The observation on the potential risk of correlated failures in AI oversight is important and thought-provoking. (Minor) weaknesses: - The proposed similarity metric $\kappa_p$ seems to be tailored for MCQ tasks. It would be insightful if some discussions on possible extensions to other tasks (e.g., regression) could be provided, even just as future directions. - The negative correlation between weak-to-strong generalization and model similarity is arguably surprising. It would be helpful to provide more insights or possible explanations for this phenomenon. Other Comments Or Suggestions: Regarding the definition of "observed error overlap" on the right of line 87, it may be worth remarking why "the fraction of samples on which both models are correct or both models are wrong" is a more reasonable metric than things like "the fraction of samples on which the two models agree (in the multi-class setting)". Is the binary nature of the definition crucial here? Questions For Authors: Some questions are raised in the "Other Strengths And Weaknesses" section. I do not have further questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate that you found our metric well-motivated (with comparisons to alternatives), experiments comprehensive, and observations insightful. > The proposed similarity metric seems to be tailored for MCQ tasks. It would be insightful if some discussions on possible extensions to other tasks (e.g., regression) could be provided, even just as future directions. Indeed, our metric as described in the main paper is designed to address a key challenge in MCQ tasks—specifically, the lack of coherent classes needed to compute the marginal distributions for inter-annotator agreement metrics like Cohen’s κ [L976-979]. You might find Appendix A.3 interesting, where we extend the metric to classification, along with a brief discussion of challenges for exact match settings, for which we only provide a discrete version. On the other hand, for free-response tasks like creative writing, one could use both perplexity and model-based metrics, which have their own challenges as discussed in L392-403. That said, we ran experiments where we compute similarity on MMLU-Pro MCQs and plot Elo ratings assigned to evaluated models by making an LLM-judge pick between open-ended chat responses on AlpacaEval -- [New Plot](https://imgur.com/a/CIKQOmf). The strong correlation shows initial evidence that MCQ similarity might transfer across tasks as a predictor. For tasks like regression, for the observed agreement, $c_{obs}$, one could measure a distance metric over the two models' predictions, aggregating across samples. Once again, models with lower error would have lower distance in prediction, so the challenge lies in defining chance agreement, $c_{exp}$, for a model with a given error. We would have to make appropriate assumptions about the distribution of errors, such as gaussian errors, based on which $c_{exp}$ can then be computed. Thanks for this interesting question; We will add this discussion to Appendix A.3. 
We are excited about adapting the metric for other tasks in future work! > The negative correlation between weak-to-strong generalization and model similarity is arguably surprising. It would be helpful to provide more insights or possible explanations for this phenomenon. The result can seem surprising if we view weak and strong models purely through the lens of accuracy. This is where we think our framing of model similarity (or difference) at a sample level is insightful! Lower accuracy does not imply that the knowledge of weak models is a strict subset of stronger ones. Rather, weak models can have complementary knowledge, and we hypothesise the transfer of this also contributes to weak-to-strong generalization. Model similarity provides a way to measure complementary knowledge in terms of the difference in samples they get right. Lesser complementary knowledge to transfer might be the explanation for the seemingly surprising trend of lower weak-to-strong generalization when model similarity is higher (negative correlation). We tried to motivate this in L248-260. We will be sure to utilise the extra page allowed in the revision to expand this section, by including content currently in Appendix C.1 and C.2. > “Regarding the definition of "observed error overlap" on the right of line 87, it may be worth remarking why "the fraction of samples on which both models are correct or both models are wrong" is a more reasonable metric than things like "the fraction of samples on which the two models agree (in the multi-class setting)". Is the binary nature of the definition crucial here?” In L87, we are stating the definition used for error consistency defined in Geirhos et al. (2020). Our own metric modifies this to measure “the fraction of samples on which two models agree”, just as you proposed. We agree it’s a better definition, as it distinguishes differing mistakes [L129-133]. 
As you correctly noticed, our metric $\kappa_p$ is equivalent to error consistency (Geirhos et al.) for binary classification [L894-896, Appendix A.1]. Great minds do think alike ;) Thanks for your question and suggestions. We hope our response increases your support for our work and we are happy to discuss further! --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ detailed responses to my questions. After considering the other reviews and the authors’ responses, I remain convinced that this work offers strong empirical evidence and valuable insights. I will maintain my current evaluation.
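To make the regression extension proposed in the rebuttal concrete, here is a hypothetical sketch of a chance-corrected similarity under the suggested independent-Gaussian-error assumption. All names here are ours, invented for illustration; the rebuttal only outlines the idea.

```python
import random

def regression_similarity(preds_a, preds_b, sigma_a, sigma_b):
    """Hypothetical chance-corrected similarity for regression, in the
    spirit of the rebuttal: compare the observed mean squared distance
    between predictions to its expectation sigma_a^2 + sigma_b^2 under
    independent zero-mean Gaussian errors. Near 0 for independent errors;
    approaches 1 as errors become identical."""
    n = len(preds_a)
    c_obs = sum((x - y) ** 2 for x, y in zip(preds_a, preds_b)) / n
    c_exp = sigma_a ** 2 + sigma_b ** 2
    return 1 - c_obs / c_exp

rng = random.Random(0)
n = 50_000
# Both models predict a truth of 0 with unit-variance errors.
indep_a = [rng.gauss(0, 1) for _ in range(n)]
indep_b = [rng.gauss(0, 1) for _ in range(n)]
# Correlated errors: a shared component plus independent noise (unit variance).
shared = [rng.gauss(0, 1) for _ in range(n)]
corr_a = [0.8 * s + 0.6 * rng.gauss(0, 1) for s in shared]
corr_b = [0.8 * s + 0.6 * rng.gauss(0, 1) for s in shared]
sim_independent = regression_similarity(indep_a, indep_b, 1.0, 1.0)
sim_correlated = regression_similarity(corr_a, corr_b, 1.0, 1.0)
```

As the rebuttal notes, the hard part is the chance-agreement term: the $\sigma_a^2 + \sigma_b^2$ expectation holds only under the stated Gaussian-independence assumption.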
Multi-Armed Bandits with Interference: Bridging Causal Inference and Adversarial Bandits
Accept (poster)
Summary: This paper is the first to study MAB with interference. The learning model in this paper requires each node to take the same action and assumes that the interference intensity decays with distance. The paper proposes an EXP3-IX algorithm based on exposure mapping, achieving a high-probability regret upper bound. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I have checked most of the proofs and found that they are correct. Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: N/A Essential References Not Discussed: The vast majority of relevant references have been discussed. Other Strengths And Weaknesses: **Strengths:** 1. This paper proposes the first study of MAB with interference. 2. This paper proposes an algorithm that integrates exposure mapping. By leveraging exposure mapping and the assumption on interference intensity, this algorithm optimizes the performance of the general switchback strategy combined with EXP3-IX. 3. The experimental section of the paper is relatively comprehensive. **Weaknesses/Problems:** 1. The regret definition in the paper only considers the optimal switchback policy, where each node takes the same action. This somewhat reduces the problem's difficulty, and the authors do not provide a clear motivating example from real-world applications. 2. The subheadings in the paper are somewhat confusing. Based on my understanding, a simple combination of EXP3-IX and the switchback policy is already sufficient to achieve a high-probability regret upper bound. I am not entirely sure why the authors named Section 3 "Expected Regret"; it might be because this chapter primarily establishes an expected regret lower bound. As for Section 4, I find "High Probability Regret" also inappropriate, as obtaining a high-probability regret bound is fairly straightforward. 
From my perspective, the main contribution of the paper is integrating the exposure mapping mechanism into EXP3-IX and leveraging the assumption of interference attenuation to derive a regret bound that is tighter than the one obtained by directly applying EXP3-IX with the switchback policy. 3. The discussion of stochastic and adversarial settings in the related work section is somewhat controversial. In the stochastic case, it is typically assumed that the true reward is perturbed by a 1-sub-Gaussian noise, making the reward unbounded. In contrast, the adversarial case usually assumes that the reward is bounded. Given the setting of this paper, I believe the authors should state that their work and the referenced papers consider different settings, rather than claiming that the adversarial setting is more general than the stochastic one. 4. The authors do not provide motivating examples/references for the proposed Decaying Interference Property assumption. Other Comments Or Suggestions: ## update after rebuttal: The authors have addressed my concerns, so I recommend acceptance. Questions For Authors: See the Strengths And Weaknesses section Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful comments. We appreciate the feedback and would like to share our responses below. *1. Definition of Regret:* Alternatively, one could define the benchmark to be the optimal "personalized" treatment assignment from a function class $\mathcal{F}$. In this case, our analysis still applies—for instance, the expected regret bound scales with a factor of $\sqrt{|\mathcal{F}|}.$ We note that this "personalized" benchmark was used in Agarwal et al. 2024; however, the cost paid for choosing this stronger benchmark is that the environment was assumed to be **stationary**. It’s also worth noting that the same critique could be made of many foundational works in causal inference, which focus on the ATE between all-1 and all-0 policies. While these benchmarks appear simplistic, they are far from trivial and have led to a rich body of work in causal inference. Of course, policy learning—finding a good mapping from covariates to treatment—is a natural extension. But it is viewed as a more advanced goal, and builds upon insights from the fixed-policy setting, analogous to the one in this work. *2. Expected regret vs h.p. regret:* It is true that a h.p. bound can be obtained by treating the entire system as a single “unit” and applying h.p. bounds for EXP3-IX (as we discussed at the end of the intro and the start of section 4). However, this approach completely ignores N, and as a result, the tail probability does not vanish with increasing N. This dependence on N is crucial, especially in practical applications where $N \gg T.$ In fact, we dedicate Section 4.5 specifically to discussing this distinction. From a practical standpoint, a “pure” switchback policy—where the entire system flips between treatment and control—is rarely used. Instead, tech firms typically partition the space into clusters and independently assign treatment or control to each cluster in each period. 
This is precisely what our algorithm is designed to capture. A concrete example is DoorDash’s use of “clustered” switchback experiments: https://careersatdoordash.com/blog/experiment-rigor-for-switchback-experiment-analysis/ In this sense, a key contribution of our work is to lay the **theoretical foundation for a policy class that is widely deployed in industry.** While we mentioned the phrase “vanishing in N” in the abstract and other parts of the paper, we agree that this could have been more clearly emphasized. If the paper is accepted, we will revise accordingly to better highlight this point. *3. Discussion of stochastic and adversarial bandits in the literature review.* This is a fair point. To clarify, our claim that the adversarial setting is more general than the stochastic one assumes that the reward distribution has bounded support. That said, for the purposes of algorithm design and analysis, assuming sub-Gaussian noise with bounded mean behaves similarly to the bounded reward setting (e.g., Bernoulli rewards). *4. Examples/references for the Decaying Interference Property* As we mentioned right before Defn 2.3, this Decaying Interference Property (DIP) is **borrowed from Leung 2022**. The best way to motivate the DIP is by noting that it generalizes several natural interference models that are well-known in the literature, including: - (a) SUTVA (see our remark 2.4 right after def 2.3); - (b) k-neighborhood model, where two units interfere with each other if their distance is at most k; - (c) Cliff-Ord autoregressive model, where the treatment effect on unit u is a linear combination of the treatment effects on its direct neighbors (plus base effects). In this case, we have $\psi(r) = r^{-2}.$
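The k-neighborhood special case (b) above can be illustrated with a toy sketch: on a line of units, an outcome that depends only on treatments within distance kappa satisfies the decaying-interference property with $\psi(r) = 0$ for $r > \kappa$. This is our own hypothetical outcome function, not the paper's reward model.

```python
def outcome(u, treatments, kappa):
    """Toy k-neighborhood interference on a line of units: unit u's outcome
    is the mean treatment among units within distance kappa. Hypothetical
    example of the decaying-interference property with psi(r) = 0 for
    r > kappa; not the paper's reward model."""
    window = [treatments[v] for v in range(len(treatments))
              if abs(v - u) <= kappa]
    return sum(window) / len(window)

n, kappa, u = 21, 2, 10
base = [0] * n

# Flipping a treatment farther than kappa from u leaves u's outcome
# unchanged, while a flip within distance kappa changes it.
far = list(base); far[u + 5] = 1
near = list(base); near[u + 1] = 1
```

Under this toy model, interference from beyond the kappa-neighborhood is exactly zero, which is the sharpest form of the decay assumption the rebuttal describes.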
I still have some questions: When you mentioned "utilizing N," were you referring to the decay assumption? If this assumption is relaxed, do the properties discussed in the paper still hold? In addition, I find the assumption $N \gg T$ to be unrealistic in typical online settings. In most bandit tasks, the time horizon $T$ usually exceeds 10,000. If we assume $N \gg T$, say $N = 100000$, it would be extremely difficult to collect data from such a large number of units in real-time. Moreover, the case mentioned by the authors, where $\delta = e^{-\alpha T}$, is rarely considered in the bandit literature. In fact, most works assume a fixed confidence level with $\delta = \Theta(1)$, and the regret upper bound is typically analyzed as $\widetilde{O}(\sqrt{KT})$. Does the setting $\delta = e^{-\alpha T}$ have any particular significance in the context of MABI? Minors: Could the authors clarify the scale of the x-axis and y-axis in the experimental plots shown in Section 4.5? --- Reply to Comment 1.1.1: Comment: Q: *"When you mentioned "utilizing N," were you referring to the decay assumption?"* A: Not exactly. In fact, the decay function $\psi$ may not depend on N (e.g. in the spatial interference setting, \psi is a function of the max distance that two users can interfere). By "utilizing N", we meant that better statistical performance can be achieved (e.g. for estimating treatment effects) when N is large; for example, many work in causal inference consider T=1, and the variance of their estimators decrease in N. In contrast, switchback don't "utilize N" since they view the entire system as a single unit. Q: *"In addition, I find the assumption unrealistic in typical online settings.* A: good point. Here, a "round" can be interpreted as the amount of time we wait before we are allowed to change the treatments. In practice, its length can range from hours to months, and so T is typically hundreds/thousands; whereas N, the number of users, can be on the **millions**. 
Q: *"If we assume $N\gg T$, say $N=10^5$, it would be extremely difficult to collect data from such a large number of units in real-time."* A: True, but this actually **highlights** the advantage of our cluster-randomization approach - instead of collecting data from each user, all the policy needs is **cluster-level statistics** (e.g. mean revenue from a ZIP code region). Handling such data is much easier. Q: *"Moreover, the case mentioned by the authors, where $\delta = e^{-\alpha T}$, is rarely considered in the bandit literature. In fact, most works assume a fixed confidence level with $\delta=\Theta(1)$"* A: This is a good point. However, even when $\delta = \Theta(1)$, our approach still achieves an **asymptotically lower VaR**. To make this concrete, consider spatial interference where two units interfere if their distance is at most $\kappa = O(1)$. By Corollary 4.8, the VaRs of our algorithm and the switchback policy scale as $\sqrt{ \frac{T}{N} \log \frac{1}{\delta} }$ and $\sqrt{ T \log \frac{1}{\delta} }$. In particular, if we take $N = T^2$ and $\delta = \Theta(1)$, then our algorithm’s VaR is $\sqrt{1/T}$, while the switchback policy’s VaR is $\sqrt{T}$; this is a sharp contrast as one is decreasing in T while the other is increasing. Q: *"Minors: Could the authors clarify the scale of the x-axis and y-axis in Section 4.5?"* A: In Fig. 2, the x-axis is $T$, and the y-axis is the $(1-\delta)$-VaR of the regret, which is roughly $$E[{\rm regret}] + O(\sqrt {\log 1/\delta})\cdot ({\rm standard\ deviation\ of\ regret}).$$ In Fig. 3, we consider an alternative perspective: the y-axis is the "excess regret", defined as $${\rm True\ regret} - {\rm Minimax\ optimal \ regret}.$$ A **key distinction** is that the excess regret may be **decreasing** if we fix a relationship between $T,N$ such that $N\gg T$ (e.g. $N=T^2$).
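Dropping constants and using the scalings quoted from Corollary 4.8 in the exchange above, the contrast between the two VaR rates can be checked numerically. This sketch only evaluates the stated scalings with $N = T^2$; it is an illustration, not the paper's experiment.

```python
import math

def var_clustered(T, N, delta):
    # VaR scaling for the clustered algorithm (per the quoted Corollary 4.8,
    # constants dropped): sqrt((T / N) * log(1 / delta)).
    return math.sqrt((T / N) * math.log(1 / delta))

def var_switchback(T, delta):
    # VaR scaling for the pure switchback policy: sqrt(T * log(1 / delta)).
    return math.sqrt(T * math.log(1 / delta))

delta = 0.1
# With N = T^2, the clustered VaR decreases in T while switchback's increases.
small = (var_clustered(100, 100**2, delta), var_switchback(100, delta))
large = (var_clustered(10_000, 10_000**2, delta), var_switchback(10_000, delta))
```

This reproduces the "sharp contrast" in the reply: one rate shrinks with the horizon while the other grows, even at a fixed confidence level.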
Summary: The authors study a multi-armed bandit problem where interference (treatment of one arm affects the outcome of others) exists. The authors theoretically prove that a switchback policy achieves optimal regret. They provide a novel method based on clustered randomization and prove that the regret of the proposed method is both optimal in expectation and vanishing in N with high probability. Claims And Evidence: The authors concisely described key ideas of their proofs. They highlighted key steps and provided detailed arguments in the main paper. Methods And Evaluation Criteria: Yes, the proposed method clearly addressed the shortcomings of existing methods. To address the drawback of switchback policies that rates do not vanish as N grows, they propose a policy that has a regret bound vanishing in N. To achieve this, they integrate the truncated HT estimator into the EXP3-IX framework. There are two main challenges: the uniform spatial clustering is not robust since arms may have very low probabilities; it is unclear how to select the IX parameter in the batched setting. Theoretical Claims: The authors establish an upper bound restricted to switchback policies. They further establish a lower bound showing that rates cannot be improved by leveraging N using more complicated policies. They then prove the regret bound for their proposed method. I did not find any error in the theoretical claims. Experimental Designs Or Analyses: The results showcase the VaR of the regret. Although the experimental results are limited, I understand the focus of the paper is on theory and thus experiments are not the main contribution. Supplementary Material: The supplementary material is not checked. Relation To Broader Scientific Literature: The paper clearly discussed relevant literature. Essential References Not Discussed: I did not find any essential references not discussed. Other Strengths And Weaknesses: The paper is well written and easy to follow. 
Other Comments Or Suggestions: How does the choice of the learning rate and the IX parameter beta affect the performance of the proposed method? Could the authors verify empirically that the proposed method is robust against choices of hyperparameters? Questions For Authors: Please see my comments above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your feedback. *"How does the choice of the learning rate and the IX parameter beta affect the performance of the proposed method?"* Good question. Intuitively, $\beta$ controls the forced exploration of arms with low score. This aligns very well with Theorem 4.6, where $\beta$ appears in both terms B and C. In fact, B corresponds to the tail mass, and a small $\beta$ (which means less exploration) would cause the tail mass to go up. On the other hand, C is the regret caused by the bias (due to both interference and bias in the estimator). If $\beta$ is away from the "sweet spot", the HT estimator has a high bias. Thanks again for this question, which helps improve the clarity of our results. If accepted, we will add these discussions to the paper.
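For readers unfamiliar with the role of the IX parameter discussed in this exchange, here is a generic single-learner EXP3-IX sketch (stated with losses, as is conventional). This is not the paper's clustered Algorithm 1 with the truncated HT estimator; `gamma` plays the role of the $\beta$ discussed above, and all names are ours.

```python
import math
import random

def exp3_ix(n_arms, horizon, eta, gamma, loss_fn, seed=0):
    """Generic EXP3-IX sketch (NOT the paper's clustered Algorithm 1).
    The IX parameter gamma shrinks the importance-weighted loss estimates,
    trading a small bias for lighter tails -- the role of beta above."""
    rng = random.Random(seed)
    weights = [1.0] * n_arms
    total_loss = 0.0
    for t in range(horizon):
        s = sum(weights)
        probs = [w / s for w in weights]
        arm = rng.choices(range(n_arms), weights=probs)[0]
        loss = loss_fn(t, arm)  # observed loss in [0, 1]
        total_loss += loss
        # Implicit exploration: gamma in the denominator caps the estimate.
        est = loss / (probs[arm] + gamma)
        weights[arm] *= math.exp(-eta * est)
    return total_loss

# Arm 0 is best (loss 0.2 vs 0.8): the learner should concentrate on it,
# incurring total loss well below the uniform-random baseline of 0.6/round.
total = exp3_ix(3, 2000, eta=0.1, gamma=0.05,
                loss_fn=lambda t, a: 0.2 if a == 0 else 0.8)
```

Setting `gamma` too small recovers the heavy-tailed vanilla importance-weighted estimates; setting it too large inflates the bias term, mirroring the "sweet spot" the rebuttal describes for $\beta$.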
Summary: The paper presents an optimal expected regret bound and presents a high-probability regret bound in the presence of correlated rewards among the arms. post-rebuttal: I wish to thank the authors for the reply. I will keep my score. Claims And Evidence: Claims are well-supported by evidence. Methods And Evaluation Criteria: I am unsure about how the distance-decaying interference assumption connects to other definitions of interference in the causal inference literature. Theoretical Claims: I have not checked the correctness of any proofs for theoretical claims. Experimental Designs Or Analyses: Experimental design and analysis make sense to me. Though the legend entries for benchmarks, e.g. EXP3-SB, are not well explained -- specifically, which method is this? Supplementary Material: I have not reviewed the supplementary material. Relation To Broader Scientific Literature: It is not completely clear methodology-wise what is the essential difference between stochastic bandits that consider possibly more general notions of interference and the present paper that considers a specific notion of interference and adversarial reward. Essential References Not Discussed: Relevant literature is discussed. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: There seems to be a "depends" missing in the abstract "The reward of each unit on the treatments of all units, and this dependence decays in distance." Questions For Authors: What does the computational complexity analysis of the proposed algorithm look like? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the feedback. Q: *"It is not completely clear methodology-wise what is the essential difference between stochastic bandits that consider possibly more general notions of interference and the present paper that considers a specific notion of interference and adversarial reward."* A: Good question. There are two disadvantages of the stochastic (i.i.d. reward) bandits analog: - Practicality: Real-world A/B testing involves highly dynamic and heterogeneous environments; - Technicality: If the rewards are iid, the problem wouldn't be too different from Leung 2022. In fact, all we need to do is to combine their low-error estimator with a known paradigm for stochastic bandits, such as UCB. That said, we do agree that the inclusion of a dedicated section on the stochastic version would provide a more complete picture, and we will do so if accepted. Q: *"What does the computational complexity analysis of the proposed algorithm look like?"* A: The bottleneck in terms of the time complexity lies in computing the estimator. Fix an arm. For each unit $u$, we need to compute the exposure mapping probability. This involves examining the arm assigned to each of the $O((r/\ell)^2)$ clusters that are close to $u$. Therefore, the total run time per round is $O(kN (r/\ell)^2)$. (Note that $r/\ell\le \sqrt N$.)
Summary: The authors combine Auer's EXP3 policy framework, the Horvitz-Thompson IPW (inverse propensity weighting) estimator, along with implicit exploration, and a clustered randomization scheme, in order to achieve the optimal $O(\sqrt T)$ regret bound ($T$ being the horizon), while admitting a high-probability bound that vanishes with increasing $N$ (the number of experimental units), under a spatially decaying interference property / assumption. Claims And Evidence: Theorem 3.3 on the regret for any MABI (including EXP3-based switchback) approach and Theorem 4.6 on the high-probability regret bound for the proposed Algorithm 1, supported via corollaries 4.7 - 4.9, appear to be the main theoretical claims in this paper. The authors plot the VaR (Value at Risk ?) for their regret bounds against $T$ (the horizon) and $N$ (the number of experimental units), in Fig. 2 and Fig. 3, respectively, in order to demonstrate desirable properties of their regret bounds. Methods And Evaluation Criteria: The proposed method / algorithm (Algorithm 1) combines several known approaches mentioned under "Summary" in order to obtain novel theoretical results mentioned under "Claims and Evidence". For a theoretical paper such as this one, validation via extensive proofs in the main body of the paper and the appendices appear to be appropriate evaluation criteria. Theoretical Claims: Please refer to the theoretical claims mentioned under "Claims And Evidence". This reviewer didn't check the proofs for Theorems 3.3 and 4.6 in detail. Experimental Designs Or Analyses: No experimental results are presented in this paper. Supplementary Material: There doesn't appear to be any supplementary material besides the appendices. Relation To Broader Scientific Literature: Please refer to the reference mentioned under "Questions For Authors". 
Essential References Not Discussed: On the topic of cluster-based randomized experimentation, the authors may also wish to cite "Detecting Network Effects: Randomizing Over Randomized Experiments", by Saveski, et al. Other Strengths And Weaknesses: The paper is clearly written and the technical assumptions are clearly stated. The robust random partition is illustrated quite well in Fig. 1. Other Comments Or Suggestions: In the Impact statement on line 419, "focus cumulative" should be "focus on cumulative". Questions For Authors: In the classic reference "Logarithmic regret algorithms for online convex optimization", by Hazan et al., it is shown how $O(\sqrt T)$ regret bounds can be converted into $O(\log T)$ bounds via stronger assumptions. Have the authors considered a similar approach of using stronger assumptions? Code Of Conduct: Affirmed. Overall Recommendation: 3
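The building blocks this review summarizes can be illustrated with a minimal single-round update: exponential weights, an inverse-propensity loss estimate, and implicit exploration via a smoothing term $\gamma$. This is a generic textbook-style EXP3-IX sketch under standard assumptions, not the paper's Algorithm 1 (which additionally employs clustered randomization):

```python
import math
import random

def exp3_ix_round(weights, losses, eta, gamma, rng=random):
    """One round of EXP3 with implicit exploration (EXP3-IX).
    Only the loss of the drawn arm is observed; it is reweighted by
    1/(p + gamma), where gamma biases the IPW estimate downward to
    control its variance."""
    total = sum(weights)
    probs = [w / total for w in weights]
    # Draw an arm according to the current exponential-weights distribution.
    arm = rng.choices(range(len(weights)), weights=probs)[0]
    # IPW loss estimate with implicit-exploration smoothing gamma.
    loss_hat = losses[arm] / (probs[arm] + gamma)
    new_weights = list(weights)
    new_weights[arm] *= math.exp(-eta * loss_hat)
    return new_weights, arm
```

Only the drawn arm's weight changes; the others are left untouched, which is what makes the update implementable under bandit feedback.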
Rebuttal 1: Rebuttal: Thanks for your feedback! Q: *"In the classic reference 'Logarithmic regret algorithms for online convex optimization' by Hazan et al., it is shown how regret bounds can be converted into ..."* A: This is an interesting thought. First, their result is not applicable to this work, since our reward functions are defined on a discrete domain and thus we can not define convexity (at least in the ordinary sense). But your intuition is right - if the reward function has a special form, such as linear in the neighbors treatment assignments (e.g. Cliff-Ord model) then there exists an estimator with lower error, and this would lead to better regret guarantees. This can be an interesting direction for future work. Q: *On the topic of cluster-based randomized experimentation, the authors may also wish to cite "On the topic of cluster-based randomized experimentation, the authors may also wish to cite "Detecting Network Effects: Randomizing Over Randomized Experiments", by Saveski, et al."* A: Yes, this is a relevant work; we will cite it in the updated version. --- Rebuttal Comment 1.1: Comment: I wish to thank the authors for the response and the proposed revision. I will retain my original "weak accept" rating.
Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead
Accept (poster)
Summary: This paper proposes a method to efficiently serve thousands of LoRA adapters for large language models. The authors propose a joint diagonalization-based compression method that significantly reduces storage and serving overhead while preserving model performance. To effectively scale to the advertised thousands of LoRAs, they further propose a clustering strategy to address the bottleneck arising from the shared rank $r$. In their experiments, they train 1000 LoRA adapters to demonstrate that their approach preserves most of the performance and achieves a throughput improvement over the vLLM multi-LoRA solution. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence (experimental results). Methods And Evaluation Criteria: Yes, the proposed method makes sense for the problem or application at hand (achieving improvements in throughput and storage while preserving most of the performance). Theoretical Claims: The proposed JD method has notable limitations due to its reliance on a shared basis, potentially restricting its effectiveness with orthogonal or highly uncorrelated LoRA adapters. Corollary 1 indicates that when LoRA update matrices are orthogonal, reconstruction error can become significant. The approach implicitly assumes some level of similarity among adapters, which is not just a trade-off but a limitation in the method's applicability to diverse scenarios. Experimental Designs Or Analyses: The proposed method is tested on a task suite (Wang et al., 2022) with over 1000 tasks, but it is only tested on a Mistral model. Supplementary Material: No supplementary material was submitted. Relation To Broader Scientific Literature: 1. The LoRA literature, though the original LoRA considers a single-task scenario. 2. The proposed JD approach is related to model compression and merging. 3. Efficient inference methods such as vLLM.
Essential References Not Discussed: No Other Strengths And Weaknesses: It might be difficult to add new LoRA adapters on top of the compressed old ones (the trained up and down projections might be specific to the old LoRAs). Other Comments Or Suggestions: No Questions For Authors: How are different inputs routed to the corresponding LoRA adapters? Do we need a classifier? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the review. Please see our responses below. --- ## 1. Shared Basis and Clustering Approach Our method relies on a shared basis for each cluster. Although this approach implicitly assumes some similarity among LoRA adapters, the clustering strategy is designed to work effectively even when reconstruction errors (as measured in the L2 sense) are high. This is because the merging effect within clusters—akin to averaging weights—preserves high performance, as evidenced by our experiments on 1000 LoRAs across multiple languages. ## 2. Evaluation on Additional Model Architectures While our current evaluation is performed on a Mistral model, prior work (including the original *LoRA: Low-Rank Adaptation of Large Language Models* paper) shows that LoRA operates similarly across various transformer-based architectures. Nonetheless, we can run experiments on an additional LLM before the camera-ready version to further substantiate the generalizability of our approach. ## 3. Incorporating New LoRA Adapters Adding new LoRA adapters to an already compressed set can be challenging because the trained up and down projections are tailored to the existing adapters. Our recommendation is to rerun the joint compression algorithm when new LoRAs are introduced. As we mention in Section 6.5, this can be managed via batched cron jobs (e.g., on a daily schedule), where new LoRAs are initially compressed as individual clusters and then incorporated into the overall joint compression process. ## 4. Routing and Selection of LoRA Adapters The deployment framework is designed such that the server maintains the full collection of LoRA adapters. Each request includes the identifier of the specific LoRA to be used, thereby eliminating the need for an additional classifier to route inputs. This straightforward mechanism ensures that the correct adapter is selected for each task. 
--- We hope these clarifications address your concerns regarding the reliance on a shared basis, the evaluation on a single model, the integration of new LoRA adapters, and the routing mechanism for adapter selection.
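As a concrete illustration of the shared-basis idea discussed in this rebuttal, here is a simplified sketch that extracts one shared left/right basis with plain SVDs and keeps only a small per-adapter core. This is a hypothetical stand-in (assuming square $d \times d$ updates and a one-shot SVD), not the paper's joint-diagonalization algorithm, which optimizes the shared factors:

```python
import numpy as np

def joint_compress(deltas, shared_rank):
    """Compress a list of LoRA updates delta_i (each d x d) into a shared
    basis U (d x R), V (R x d) plus small per-adapter cores Sigma_i (R x R).
    A simplified one-shot SVD sketch of the shared-basis idea."""
    U, _, _ = np.linalg.svd(np.hstack(deltas), full_matrices=False)
    U = U[:, :shared_rank]                   # shared left basis
    _, _, Vt = np.linalg.svd(np.vstack(deltas), full_matrices=False)
    V = Vt[:shared_rank, :]                  # shared right basis
    cores = [U.T @ D @ V.T for D in deltas]  # per-adapter R x R cores
    return U, cores, V

def reconstruct(U, core, V):
    """Approximate the original update from the shared basis and core."""
    return U @ core @ V
```

Only the small cores are adapter-specific, which is what reduces the per-adapter memory footprint; when the updates do not share a low-dimensional subspace, the projection loses information, matching the reviewer's observation about orthogonal adapters.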
Summary: This work focuses on a multi-LoRA serving system and significantly enhances throughput. The key approach involves compressing a collection of LoRA adapters to share a common low-rank space. This joint compression effectively reduces the total number of parameters during inference, leading to improved serving efficiency. Experimental results demonstrate that the end-to-end throughput is significantly improved while maintaining model performance. ### Update after Rebuttal ### My concerns have been addressed by the responses. I keep my original score. Claims And Evidence: Yes, the method is evaluated based on both end-to-end throughput and performance, demonstrating its effectiveness. Methods And Evaluation Criteria: Yes. Theoretical Claims: I reviewed Theorem 1 and Corollary 1, and the results are valid, providing theoretical support for the proposed method. Experimental Designs Or Analyses: Both the throughput and performance evaluations are sound. Supplementary Material: N/A Relation To Broader Scientific Literature: The work is highly related to multi-LoRA serving systems. Notable literature includes S-LoRA and Punica. Essential References Not Discussed: No Other Strengths And Weaknesses: - For the clustering strategy, how does one best decide the optimal number of clusters? - If the distribution of different LoRA models varies significantly, will it impact the effectiveness of joint compression? Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the review. Please see our responses below. --- ## 1. Deciding the Optimal Number of Clusters Please see Section 6.5 for hyperparameter recommendations. Determining the optimal number of clusters does require some hyperparameter tuning; however, our experiments indicate that the method is not highly sensitive to this parameter. In **Appendix G**, we outline a practical tuning procedure that leverages the L2 reconstruction error from a single LoRA module to guide the selection of an appropriate number of clusters for the entire collection. This approach simplifies the tuning process and ensures robust performance. ## 2. Impact of LoRA Model Distribution on Joint Compression The motivation for introducing the clustering strategy was based on the observation that joint compression is considerably more sensitive to variations in the LoRA models when performed directly. In our experiments—covering 1000 LoRAs across diverse tasks and languages—we found that the inherent variability of the LoRA models can negatively impact compression effectiveness. By clustering the LoRAs, the impact of this diversity is significantly reduced, thereby stabilizing and enhancing the performance of the joint compression. --- We hope these clarifications address your concerns regarding the optimal clustering strategy and its role in mitigating the effects of LoRA diversity. We appreciate your insightful feedback. --- Rebuttal Comment 1.1: Comment: Thank you for the responses that addressed my concerns. I will maintain my original positive score.
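The role of clustering described in this rebuttal can be illustrated with a small sketch: flatten each LoRA update, run a few rounds of k-means, and then compress each cluster independently. This is hypothetical toy code, assuming the clustering criterion is plain Euclidean distance on flattened weights (the paper's criterion may differ):

```python
import numpy as np

def cluster_loras(deltas, k, iters=10, seed=0):
    """Group LoRA updates by k-means on their flattened weights, so that
    joint compression can then be run per cluster instead of over the
    full, more heterogeneous collection."""
    X = np.stack([D.ravel() for D in deltas])
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each LoRA to its nearest center, then recompute centers.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels
```

Grouping similar adapters before compression is what keeps the per-cluster shared basis small while limiting the diversity each basis has to absorb.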
Summary: The paper addresses the challenge of efficiently serving large numbers of LoRA adapters in real-time inference settings. Existing solutions require frequent loading and offloading of LoRAs due to limited GPU memory. The authors introduce a joint compression technique where multiple LoRAs are compressed into a shared basis with LoRA-specific scaling matrices. This reduces memory footprint and improves throughput. Claims And Evidence: 1. "Throughput Improvement (1.6× Speedup) & Memory Efficiency". Supported. 2. "Compressed LoRAs retain up to 99% of original performance." Supported in Figures 2 and 3. 3. "Compression Enhances Generalization". No. There is no clear causal explanation for why compression might enhance generalization. Methods And Evaluation Criteria: 1. For the evaluation part (Figure 4), when comparing throughput, why not also compare against the SOTA S-LoRA? Even S-LoRA needs to reload the LoRA adapters. 2. Could the authors also clarify the evaluation hardware platform? 3. Could the authors also clarify the overhead of compression? Theoretical Claims: I think there are no issues. Experimental Designs Or Analyses: I think they are reasonable. Supplementary Material: I went over all parts of the supplementary material. There is no issue in the supplementary material. Relation To Broader Scientific Literature: 1. It contributes to model compression and matrix factorization. It extends classical compression methods by introducing joint diagonalization (JD), which compresses multiple LoRAs simultaneously instead of handling them individually. 2. It integrates compression into LLM serving, unlike previous methods that focus on better memory allocation, scheduling, or kernel optimizations. Essential References Not Discussed: The related works cited are sufficient for understanding. Other Strengths And Weaknesses: Strengths: 1. A novel compression method for serving LoRAs. 2. Clustering-based compression allows efficient inference. Weaknesses: 1.
While the method scales to 1000+ LoRAs, its feasibility for 100,000+ LoRAs is not explored, which could be relevant for large-scale commercial deployments. 2. Choosing the optimal rank and number of clusters requires hyperparameter tuning, which may increase complexity in real-world deployments. Other Comments Or Suggestions: No other comments. Questions For Authors: 1. For the evaluation part (Figure 4), when comparing throughput, why not also compare against the SOTA S-LoRA? Even S-LoRA needs to reload the LoRA adapters. 2. Could the authors also clarify the evaluation hardware platform? 3. Could the authors also clarify the overhead of compression? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the review. Please see our responses below. --- ## 1. Throughput Comparison and SOTA s-LoRA The vLLM multi-LoRA baseline in our experiments already incorporates advanced optimizations such as efficient scheduling and non-blocking CPU-GPU communication when swapping LoRAs, as well as techniques from S-LoRA. Consequently, the throughput comparisons in Figure 4 inherently reflect these state-of-the-art improvements without the need for an additional baseline comparison to S-LoRA. ## 2. Evaluation Hardware Platform Experiments were conducted on an **H100 80GB GPU** with memory consumption capped at 40%. This setting is intended to simulate scenarios where service providers might serve many LoRAs on more economical hardware with lower memory capacity than high-end GPUs. ## 3. Compression Overhead In **Figure 5** of the Appendix, we present detailed measurements of memory load, data transfer time, and forward-pass performance for both standard LoRA and our joint diagonalization approach (JD-LoRA). This analysis provides a clear view of the compression overhead and its impact on overall performance. Please note that the whole compression process can be done on CPU without interfering with serving of the LLMs. E.g. new LoRAs are temporarily compressed using SVD individually, and then a cron job, say, runs once a day, to compress the full set of LoRAs using joint compression. ## 4. Scalability to 100,000+ LoRAs While our current work scales to 1000+ LoRAs—currently the world’s largest open collection of LoRAs with documented training parameters—we believe that scaling to 100,000+ LoRAs is feasible in principle by scaling the number of clusters in our algorithm. ## 5. Hyperparameter Tuning Please see Section 6.5 for hyperparameter tuning. Although choosing the optimal rank and number of clusters requires some hyperparameter tuning, our experiments indicate that the method is not overly sensitive to these choices. 
In **Appendix G**, we describe a practical tuning procedure that leverages the reconstruction error (in the L2 sense) from a single LoRA module to efficiently determine appropriate hyperparameters for a collection of 1000 LoRAs, thereby reducing deployment complexity. --- We hope these clarifications address your concerns regarding throughput, hardware, compression overhead, scalability, and hyperparameter tuning. We appreciate your feedback and believe these additions significantly strengthen our work.
Summary: The authors propose a method that efficiently handles the problem of serving thousands of LoRA adapters for LLMs across many tasks by compressing them into a shared basis with LoRA-specific scaling matrices. As the number of LoRAs grows, to scale further, they use clustering-based compression, reducing memory usage while preserving performance. Their approach improves inference throughput by 1.6 times in vLLM, achieving 80% of the speed of a single LoRA while handling thousands of adapters efficiently. Claims And Evidence: + The paper provides some theoretical analysis to understand the role of the joint diagonalization method and, from that, how it motivates the clustering approach. Their analysis points out that well-clustered LoRAs make the reconstruction error low, and vice versa. They also conduct many experiments to support their claims. + However, the paper does not show why certain LoRAs cluster well together (is it based on task similarity or weight structure?). Methods And Evaluation Criteria: I think the proposed methods and evaluation criteria are appropriate for the problem of multi-LoRA serving. The joint diagonalization (JD) and clustering-based compression effectively address memory constraints and inference efficiency while preserving performance. The evaluation on 1000 LoRAs (for 1000 tasks) and throughput in vLLM aligns with real-world multi-LoRA systems. Theoretical Claims: I think the proofs for all theorems in the paper are correct and make sense. Experimental Designs Or Analyses: This paper proposes a new method that helps compress many LoRA adapters to save memory without decreasing performance too much. To validate the benefits of the proposed method in compression, performance, and reconstruction error, they train LoRAs on 1000 natural instruction tasks. After that, they clearly visualize the performance of compressed LoRAs relative to uncompressed ones.
They also visualize the relation between reconstruction error and relative performance and observe that with JD-clustering, reconstruction error is even less critical for performance. Therefore, I think the design of experiments in this paper makes sense for clarifying the benefits of the proposed method. Supplementary Material: This paper does not have supplementary material. Relation To Broader Scientific Literature: The paper improves LoRA scalability by introducing joint diagonalization (JD) and clustering-based compression, which help reduce memory overhead while preserving performance. It builds on LoRA (Hu et al., 2021) and multi-LoRA inference (S-LoRA, Sheng et al., 2023) but surpasses them by enabling efficient serving of thousands of adapters. Inspired by SVD-based compression (Meng et al., 2024), it groups (through clustering) and compresses LoRAs jointly, improving throughput 1.6× over vLLM without losing much performance compared to uncompressed LoRAs. This work bridges PEFT, model compression, and scalable inference, opening the door for future optimizations in multi-LoRA systems. Essential References Not Discussed: I see some prior works related to multi-adapter systems in LLMs that are not referred to in the paper, such as [1]; the authors might need to compare their method with this one. Other than that, I think the paper cites enough related works for understanding its key contributions. [1] Chameleon: Adaptive Caching and Scheduling for Many-Adapter LLM Inference Environments. Arxiv 2024 Other Strengths And Weaknesses: Strengths: + The paper proposes a novel method that can compress multiple LoRAs efficiently without losing much performance. This is very useful for edge systems. + They provide detailed theory supporting the method.
+ They conduct experiments on 1000 natural instruction tasks (1000 LoRA adapters) to show the benefits of the method in terms of compression, reconstruction error vs. performance, and throughput, all showing good results. Weaknesses: + While clustering improves compression efficiency, the paper lacks theoretical analysis of why certain LoRAs cluster well. + They should provide more analysis of whether clustering LoRAs in multi-domain cases (LoRAs trained on different domains) can affect performance, to strengthen the findings. + I think task diversity is a little bit limited. Other Comments Or Suggestions: I do not have any suggestions. Questions For Authors: I do not have any questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the review. Please see our responses below. --- ## 1. Clustering Basis: Weight Structure vs. Task Similarity Our approach relies solely on the LoRA weights, meaning that the clustering is driven by the intrinsic weight structure rather than an explicit measure of task similarity. While we recognize that the link between weight structure and task similarity is an area ripe for further exploration, our focus is on how these weight patterns facilitate compression. ## 2. Theoretical Analysis of Clustering Behavior Our theoretical section argues that a tight clustering of LoRAs is not a prerequisite for success. Indeed we expect (and observe) very large reconstruction errors (in the L2 sense) for the LoRA weight matrices. Instead, we argue that in addition to simple weight matrix recovery, there is also a very important effect of our approach more akin to merging of LoRAs by averaging their weights (see the later parts of the theoretical section). We believe this explains why the observed LLM outputs do not degrade at all until the 2-norm reconstruction error becomes very large indeed (see Figure 3). This is intuitive given the inner product structure of the attention mechanism in transformers. Nonetheless, as demonstrated in the merging literature, combining too many LoRAs eventually leads to degradation. This insight motivated our strategy of performing joint diagonalization on smaller, independent clusters. Another indication for the diversity is the low results of TIES merging (Appendix H.3), which might have succeeded otherwise. ## 3. Multi-Domain Clustering and Task Diversity Our experiments include LoRAs trained across multiple domains and languages, which reinforces the importance of robust clustering in multi-domain settings. 
The diversity of our task set is further underscored by **Table 3** in the Appendix, which details all 1000 tasks sourced from *Super-Natural Instructions: Generalization via Declarative Instructions on 1600+ NLP Tasks*. Despite the high reconstruction errors—indicating that the LoRAs are not overly similar—the clustering process effectively exploits the underlying structure, leading to improved compression efficiency. --- We hope these clarifications adequately address your concerns regarding the clustering rationale, the theoretical underpinnings of our approach, and the diversity of our task set. We appreciate your feedback and are open to any further suggestions. --- Rebuttal Comment 1.1: Comment: Thank you for the responses. I am satisfied with the rebuttal and increase my rating.
Summary: This paper introduces a novel framework for efficiently managing a large set of LoRA adapters. The authors present a joint diagonalization (JD) based algorithm in both a full and a diagonal variant, which compresses multiple LoRA weights by decomposing them into a shared basis and adapter-specific scaling matrices. For scenarios with a large number of adapters, the paper proposes a clustered JD algorithm to enhance scalability. In addition, the paper provides a theoretical analysis of the reconstruction error for JD-full. Experiments on 1000 natural instruction tasks demonstrate that the proposed compression strategy can improve throughput while preserving model performance. Claims And Evidence: - The submission's claims regarding performance preservation and throughput enhancement are supported by experimental findings involving varying numbers of LoRA adapters, in some cases up to 1,000. - Results from the JD-cluster, which integrates a large number of LoRA adapters, demonstrate the scalability of the proposed methods. - A theoretical analysis of bounds on the reconstruction error is provided. Methods And Evaluation Criteria: - The proposed JD algorithm is both novel and grounded in a sound theoretical framework. Employing JD-cluster to compress a large number of LoRAs is appropriate for the problem at hand. - However, while the authors conduct experiments on 1,000 natural instruction tasks, it remains unclear whether these tasks and their corresponding LoRA adapters are sufficiently diverse. It is possible that correlated tasks and LoRA weights inflate the observed performance improvements, particularly given that the compressed LoRAs can surpass the originals (as evidenced in Figure 2). To address this concern, it would be valuable to provide additional analysis of LoRA weight diversity (e.g., through suitable visualizations) and to test the method on a wider variety of tasks.
- Additionally, the reliance on the Mistral-7B-Instruct-v0 model might constrain the scope of the evaluation. Exploring the applicability of the proposed approach on other model architectures would better demonstrate its generalizability and practical utility. Theoretical Claims: The provided proofs and theoretical claims appear to be correct and internally consistent. No major flaws or inconsistencies were identified. Experimental Designs Or Analyses: - The authors employ the JD-cluster algorithm with $k$ clusters to compress a large number of $n$ LoRAs. However, the study does not provide a detailed analysis of how $k$ varies as $n$ increases. - In addition, the comparative evaluation is limited to the original uncompressed LoRAs and an SVD-based compression approach. Including additional baselines would help to strengthen the contributions of the proposed method. Supplementary Material: N/A Relation To Broader Scientific Literature: - The paper addresses an important challenge in deploying large-scale machine learning models, particularly in real-time applications where serving numerous LoRA adapters efficiently is paramount. This concern aligns with a growing body of work focused on model compression and optimization, such as knowledge distillation, network pruning, and factorization-based methods. - The proposed JD method is a novel approach that broadens current model compression strategies. This innovation has the potential to advance both theoretical understanding and practical applications in the domain of efficient model adaptation, further bridging the gap between research and real-world deployment. Essential References Not Discussed: N/A Other Strengths And Weaknesses: - The paper offers practical implementations that enhance the clarity and applicability of the proposed method. - The experimental results are presented in a manner that can be somewhat difficult to follow. Other Comments Or Suggestions: N/A Questions For Authors: 1.
Could the authors provide a quantitative or qualitative analysis of the LoRA weight diversity (e.g., through visualizations or statistical measures)? If the weights appear largely similar, it might raise concerns about the method's capacity for broad generalization; conversely, robust evidence of weight diversity would strengthen the paper's claims. 2. Could you elaborate on how the number of clusters $k$ varies with the number of LoRA adapters $n$, and how sensitive the algorithm's performance is to different values of $k$? 3. Could the proposed method be extended or adapted to other parameter-efficient fine-tuning approaches, such as prompt-tuning? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the review. Please see our responses below. --- ## 1. LoRA Adapter Diversity and Task Coverage Please see **Table 3** in the Appendix, which lists all 1000 tasks drawn from *Super-Natural Instructions: Generalization via Declarative Instructions on 1600+ NLP Tasks*. This dataset covers a wide range of tasks, including multiple languages, and provides detailed insight into our data collection protocol. Moreover, the relatively high reconstruction error observed in our experiments indicates that the LoRA adapters are not overly similar. Another indication for the diversity is the low results of TIES merging (Appendix H.3), which might have succeeded otherwise. The significant gains from clustering further suggest a meaningful clustered structure among tasks. In response to your suggestion, we will add an analysis of cosine similarity between LoRA adapters prior to compression. If you have specific similarity thresholds or metrics in mind, we welcome your recommendations. ## 2. Evaluation on Additional Model Architectures We trained an unprecedented number of LoRA adapters on the Mistral-7B-Instruct-v0 model to rigorously validate our approach. While we acknowledge that additional experiments on more architectures could provide further insights, our current evaluation robustly supports our claims, and we hope you consider this work as a meaningful step forward. Still, we aim to extend our evaluation to include results from another model architecture, which further demonstrates the generalizability and practical utility of our method. ## 3. Analysis of Clustering Parameter Sensitivity In **Section G** of the Appendix and Figure 6, we study how the number of clusters \(K\) varies with the number of LoRA adapters \(N\) (with experiments conducted for \(N=100\) and \(N=500\)). This analysis helps clarify the sensitivity of the algorithm’s performance with respect to different values of \(K\). 
Further, see Section 6.5 about selecting hyperparameters. ## 4. Comparative Evaluation and Extensions to Other Methods In addition to the original uncompressed LoRAs and the SVD-based compression baseline, we compare to Ties-Merging in Appendix H.3 (see Table 7). Regarding potential extensions, we acknowledge that other parameter-efficient fine-tuning methods such as prompt-tuning might also benefit from exploiting shared structures across tasks. We agree that exploring this possibility is an interesting direction for future work. --- We hope that these clarifications and our planned experiments sufficiently address your concerns.
Summary: This paper considers the problem of serving a large number of LoRA adapters for the same LLM. This is a very practical scenario where each LoRA adapter corresponds to one specific task. If one naively switches between different adapters, the throughput degrades a lot when the number of adapters is large. So in this paper, the authors propose to compress all LoRA adapters together by finding a shared basis for all of them. The proposed method significantly reduces the serving overhead when the number of adapters is up to 1000. Overall I think the method is very smart and direct. The throughput gain over the naive solution is very significant. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: This paper provides useful insights on how to serve a large number of LoRA adapters in practice. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the novelty of our method and its throughput gains over naive solutions. We emphasize that our joint compression approach effectively addresses the GPU memory constraints and the overhead associated with loading and unloading LoRA adapters and is accompanied by both theoretical and empirical validation. If there is any additional clarification or experiment we can provide to further strengthen your confidence in our approach, please let us know. Otherwise, we would greatly appreciate it if you could consider revising your score upward based on our response. Thank you again for your valuable feedback.
Generalized additive models via direct optimization of regularized decision stump forests
Accept (poster)
Summary: The authors propose an alternative framework to train GAM models with piecewise constant shape functions based on an alternating optimisation algorithm. A specific regularization strategy is proposed to deal with the issue of overfitting when directly optimizing this type of models. The authors find that this approach is competitive with other boosting based approaches to optimize the same type of models. Claims And Evidence: As far as I am concerned, the authors make no unsupported claims. I think, however, that the authors could clarify that even though their method converges, assuming no cycles, there is no guarantee of convergence to the global optimum solution. Furthermore, there is no guarantee on the rate of this convergence. This type of method could converge very slowly in problems with narrow "diagonal" valleys where alternating minimization makes little progress with each iteration. Methods And Evaluation Criteria: In my opinion, the methods and evaluation make sense for the problem at hand. Theoretical Claims: There are no proofs or theoretical claims. Experimental Designs Or Analyses: The choice of baselines and the experimental setup all seem adequate. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: GAM models that rely on piecewise constant shape functions, such as Explainable Boosting Machines, are arguably SoTA within this model class. This work proposes a new approach to train models using the same type of shape function, through directly optimizing the splits of a stump forest rather than the boosting approach of EBM. Essential References Not Discussed: None that I am aware. Other Strengths And Weaknesses: ### Strengths The authors put a great effort in the presentation of the paper. Everything is clearly explained, with plenty of insightful figures that illustrate the main points. 
Furthermore, the proposed approach, while somewhat reminiscent of backfitting, is interesting and original, presenting a significant enough departure from how similar piecewise constant models are typically trained. ### Weaknesses My overall interpretation of the results is that GAM models seem rather limited in their representation capacity given their rigid model structure, and there isn't much to be gained in the choice of different families of shape functions or how they are fit. Particularly, for all of the models learning piecewise constant shape functions, the differences in performance seem small, especially when comparing with black-box models that don't follow the GAM structure. As an example, a standard GBDT (with deeper trees) achieves < 3% error on Covtype. This makes the small differences between GAM models, all of which achieve ~22% error, seem inconsequential in practice. As such, while I value work on interpretable models, I don't see this work as having a particularly large impact, especially given that it seems significantly slower to fit than alternatives. **Post rebuttal note:** Considering the authors' efforts in optimizing their algorithm and the significant speedups they achieved, this pointed weakness is no longer applicable and the argument for adopting the proposed algorithm becomes considerably more compelling. Other Comments Or Suggestions: None Questions For Authors: (**RESOLVED**) 1. As the authors point out as a possible point of improvement, it seems to me that the optimization of the leaf values could be formulated as a standard quadratic program. Perhaps calling a solver directly would forego the overhead of CVXPY and speed up the training process, which is one of the weaker points of the algorithm. If a substantial speed-up could be achieved I would be willing to raise my score to a clear accept. Code Of Conduct: Affirmed. Overall Recommendation: 4
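To make the reviewer's QP remark concrete: with the split parameters held fixed, and dropping the roughness regularizer, the leaf-value subproblem reduces to plain least squares over stump indicator columns. The sketch below is an illustrative assumption (hypothetical helper names, intercept-plus-jump parameterization), not the paper's implementation; adding the $\ell_1$ roughness penalty on adjacent jumps would turn it into the quadratic program the reviewer alludes to.

```python
import numpy as np

# Sketch only: unregularized leaf-value fitting for a fixed stump forest.
# Each stump contributes an indicator column 1[x_d >= tau]; its coefficient
# is the jump at the threshold, plus one global intercept column.

def leaf_design_matrix(X, stumps):
    """stumps: list of (feature_index, threshold) pairs."""
    cols = [np.ones(len(X))]
    cols += [(X[:, d] >= t).astype(float) for d, t in stumps]
    return np.column_stack(cols)

def fit_leaf_values(X, y, stumps):
    """Jointly optimal jumps for fixed splits (plain least squares)."""
    A = leaf_design_matrix(X, stumps)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # intercept followed by one jump per stump
```

Because every column is a step function of one feature, the jump coefficients map directly onto right-leaf values of the corresponding stumps.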
Rebuttal 1: Rebuttal: Thank you for providing us with a very valuable and insightful review! We greatly appreciate your effort in reviewing our paper. * **Convergence to global optimum.** The problem of learning an optimal decision stump forest is computationally intractable in general. Even in the case of 1D splines, finding optimal knot positions is known to be NP-hard [1]. Therefore, guaranteeing convergence to a global optimum in polynomial time is unlikely. Our algorithm guarantees a monotonic decrease in the objective function at each iteration and converges to a local optimum. We expand on this discussion in our rebuttal to wGmH. * **Slow rate of convergence.** Our optimization space includes both continuous and discrete components. The continuous part, the leaf values, is optimized exactly by solving a convex problem. The discrete part involves the split parameters of the stumps. Alternating optimization is one of the most effective approaches for such problems. For instance, the well-known k-means algorithm also alternates between discrete and continuous updates and is rarely hindered by slow convergence in practice. In our experiments, the algorithm typically converges within around 5 iterations, where one iteration is defined as a full pass over all stumps and an update of all leaf parameters. * **Limited representation capacity.** We acknowledge and agree with the observation regarding the limited representation capacity of GAMs, and that there may be limited room for improvement over existing baselines. While it is true that on datasets like Covertype, GAMs perform notably worse than high-capacity models, there are also cases, such as the cpuact dataset, where GAMs achieve accuracy comparable to models like XGBoost. The key strength of GAMs lies in their interpretability and their ability to provide meaningful insights into the data. This is particularly valuable in sensitive domains like healthcare, where understanding model behavior is crucial [2]. 
* **Training time improvement.** The majority of our time during this research project was devoted to analyzing the behavior of the algorithm and exploring various strategies to address overfitting. Comparatively little time was initially spent on improving runtime performance. During the rebuttal period, we revisited the formulation of the leaf-fitting problem and focused on optimizing its efficiency. Instead of optimizing over the stump leaf parameters, we now directly optimize over the constant piece values of the GAM shape functions. That is, we switch from the stump forest representation to its equivalent GAM form. This change reduces the number of optimization variables by a factor of two. Additionally, by rewriting the convex problem in CVXPY using a different set of atomic functions, we were able to switch the underlying solver to OSQP (Operator Splitting Quadratic Program). In line with your suggestion, this allows us to cast the problem as a standard quadratic program while still using CVXPY to handle the transformation, with minimal overhead. These changes result in a significant improvement in training time, by a factor of 5 to 20 depending on the dataset. After solving for the constant pieces of the GAM, we convert the result back to the stump forest representation. Specifically, let the shape function for feature $d$ be $f_d(x_d)$, which consists of $T_d+1$ constant pieces with values $\beta_0, \dots, \beta_{T_d}$ from left to right. The leftmost stump has leaf values $\mu^l_0 = \beta_0$ and $\mu^r_0 = \beta_1$. For subsequent stumps with index $i > 0$ we set the left leaf to $\mu^l_i = 0$ and the right leaf to $\mu^r_i = \beta_{i+1} - \beta_i$. In other words, the left leaf is always 0, and the right leaf captures the difference between adjacent constant values. It is straightforward to verify that this reconstruction yields a function equivalent to the original GAM.
The updated Tables 1 and 2 (https://anonymous.4open.science/r/pdf_to_anon-1CD8/icml25_rebuttal_figures.pdf or https://drive.google.com/file/d/1Xso8hwFd9rB_XqAzH6H8ri7vUbrXEaq6/view?usp=sharing) reflect the significantly improved training times achieved through this new formulation. Now, our method is among the fastest at learning accurate piecewise constant GAMs. * **Warm-starting ability.** Another independent strategy to accelerate training and potentially improve accuracy is to initialize the optimization from stronger starting points. During the rebuttal period, we incorporated an additional baseline, FastSparse, which efficiently produces reasonably accurate piecewise constant GAMs. Using these models for initialization led to even faster convergence and slightly improved accuracy on several datasets. [1] G. Beliakov, "Least squares splines with free knots: global optimization approach", in Applied Mathematics and Computation, 2004. [2] R. Caruana et al., "Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission", KDD 2015. --- Rebuttal Comment 1.1: Comment: After carefully reading the remaining reviews and the authors' rebuttals, I have no further serious concerns about the paper. I appreciate the authors' efforts in optimizing their algorithm and achieving significant speed-ups. I also appreciate the comparisons with the baselines requested by reviewer `r73Z`. As such, I am happy to increase my score from 3 to 4. **Minor point:** My comment about convergence to a local optimum was merely that a clarification could be added to the manuscript, in the paragraph starting on line 158, when the authors claim that the algorithm converges. While I agree that it should be obvious, given the nature of the problem, I could see a reader being misled, since the authors don't explicitly mention that this convergence is to a local optimum.
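A minimal sketch (pure Python, hypothetical names) of the stump-to-GAM round trip described in the rebuttal above, where the leftmost stump carries $(\beta_0, \beta_1)$ and each later zero-indexed stump $i$ carries the jump between adjacent constant values:

```python
def gam_to_stumps(thresholds, betas):
    """Convert a piecewise constant shape function (sorted thresholds and
    constant values beta_0..beta_T, one more value than thresholds) into an
    equivalent list of stumps (threshold, mu_left, mu_right)."""
    stumps = [(thresholds[0], betas[0], betas[1])]  # leftmost stump
    for i in range(1, len(thresholds)):
        # later stumps: left leaf 0, right leaf = jump between pieces
        stumps.append((thresholds[i], 0.0, betas[i + 1] - betas[i]))
    return stumps

def eval_stumps(stumps, x):
    """Sum of stump predictions at a point x."""
    return sum(mu_r if x >= t else mu_l for t, mu_l, mu_r in stumps)

def eval_gam(thresholds, betas, x):
    """Piecewise constant shape function evaluated at x."""
    return betas[sum(x >= t for t in thresholds)]
```

On any test point the two representations agree, which is the equivalence the rebuttal asserts.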
Summary: This paper proposes an alternating optimization method for training an additive model composed of decision stumps. The optimization procedure alternates between two steps: (1) selecting a feature and determining the optimal split value for a decision stump, and (2) jointly optimizing all coefficients through a convex optimization solver. The method incorporates a regularization term to penalize the roughness of the coefficients, which helps control model complexity. The authors claim that their approach effectively reduces the number of stumps while maintaining both training and test performance. ## update after rebuttal I want to thank the authors for replying to my second-round rebuttal comments. I have also read the reviews and rebuttal comments by the other two reviewers. The mathematics behind individual stump optimization makes sense to me. This should be included in the paper (at least in the Appendix) during revision. The new plot on the FICO dataset (Figure 2) included in the anonymous link also looks reasonable to me. This figure should be included during revision. I think the reason FastSparse has a larger support size than I expected is your Fold 4, where somehow the sparsity wasn't optimized around 20 features. This wasn't my experience, but maybe it has something to do with your particular 5CV split or the feature binarization preprocessing step. Since my major concerns are addressed, I no longer have major objections to this paper being accepted. I am raising my evaluation to a score of 3, above the acceptance threshold. Claims And Evidence: The claims made in the paper lack rigorous support from the experimental results. The authors assert novelty in their approach to generalized additive models, yet prior literature already presents similar methodologies and more impressive results.
Specifically, the claim that this is the first paper to introduce such a model is incorrect—relevant prior work should be acknowledged (see references). Furthermore, the reported performance improvements are unconvincing. Based on results in Tables 1 and 2, when considering standard deviations, the proposed method performs comparably to existing approaches in both training and test settings. However, the baseline spline-based method achieves significantly smaller model sizes. This raises concerns about the practical significance of the proposed contribution, as the problem appears to be more effectively addressed by existing methods. Methods And Evaluation Criteria: The proposed alternating optimization method could lead to suboptimal solutions. In each iteration, the approach selects a feature and a corresponding split value while keeping the coefficients fixed. This rigid structure can lead to suboptimal feature and split choices, making the optimization process less refined. While such a method may offer some improvements over existing approaches, it lacks sophistication and does not provide guarantees on solution quality. The evaluation criteria used in the paper align with standard machine learning practices, as they include training and test performance, model size, and training time. However, a critical issue is the absence of strong baseline comparisons. The experiments should include comparisons with state-of-the-art methods for both linear regression and graphical model selection to ensure a fair assessment of the proposed approach’s effectiveness. Theoretical Claims: There is no theoretical claim in this paper. Experimental Designs Or Analyses: The experimental design and analyses are standard practices in the ML community. Supplementary Material: There is no supplementary material submitted. Code is not provided to check reproducibility. 
Relation To Broader Scientific Literature: This paper is related to the broad research topics of interpretable machine learning, boosting, and decision trees. The key difference, compared with the boosting literature, is that historical stumps are allowed to be optimized over and over again. Essential References Not Discussed: The paper omits several essential references related to generalized additive models and decision tree-based methods. Specifically, the authors claim to be the first to introduce a generalized additive model, but prior work has already developed such models with competitive performance and significantly fewer parameters. The following paper should be cited: 1. Fast Sparse Classification for Generalized Linear and Additive Models by Jiachang Liu, Chudi Zhong, Margo Seltzer, Cynthia Rudin, AISTATS 2022. Additionally, the authors fail to compare their method against state-of-the-art decision tree-based models. The only comparisons provided are with basic tree models, which do not reflect the latest advancements in the field. The following papers present scalable and optimal sparse decision trees that should be considered: 2. Generalized and Scalable Optimal Sparse Decision Trees by Jimmy Lin, Chudi Zhong, Diane Hu, Cynthia Rudin, Margo Seltzer, ICML 2020. 3. Optimal Sparse Regression Trees by Rui Zhang, Rui Xin, Margo Seltzer, Cynthia Rudin, AAAI 2023. These papers demonstrate that classification and regression trees can be constructed with fewer than 30 variables while maintaining performance comparable to much larger tree-based models developed using older methodologies. A thorough comparison with these methods is necessary to properly contextualize the contributions of this paper. Other Strengths And Weaknesses: The writing requires significant improvement. Many sections are overly verbose and lack substantive content. Certain passages could be condensed or moved to the appendix for better readability. 
For example, from lines 209 to 252, the authors spend nearly a full page deriving the objective function in Equation 5. However, this section reads more like a stream of consciousness than a structured derivation. Instead, the paper should present Equation 5 concisely at the beginning of the section, followed by a brief yet clear explanation. Overall, the clarity and conciseness of the writing should be improved. The exposition is currently too wordy, making it difficult to follow the main contributions efficiently. Refining the explanations and eliminating redundancy would significantly enhance the readability and impact of the paper. Other Comments Or Suggestions: Some experimental details, such as Lines 301–315 (right column), should be moved to the appendix. This section is overly detailed and does not contribute significantly to the main narrative. The main text should focus on the core contributions, while dataset descriptions and minor experimental details can be placed in supplementary material. Questions For Authors: 1. How did you select the hyper parameters $\lambda$ and $\alpha$ in Equation 5, the objective function? To me, hand-picking the values does not seem like a rigorous approach of doing this. 2. Can you run [1, 2, 3], report results on datasets listed in Table 1 and 2, and make a comparison during rebuttal? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for providing an insightful and constructive review! We greatly appreciate your effort in reviewing our paper. * **Claim on novelty.** To be precise, the only place we make a claim of doing something for the first time is with respect to ***learning stump forests with good generalization without ensembling techniques of boosting and bagging***. At no point do we claim "to be the first to introduce a generalized additive model". Regarding reference [1], we acknowledge it as an important and relevant contribution. However, the main focus of [1] is not about decision stumps/trees but about learning sparse linear classifiers efficiently. The GAMs in [1] are introduced by learning a linear model on transformed features, where continuous variables are replaced with a set of binary indicators based on thresholds. We will include [1] in our literature review. We have already conducted empirical comparisons with [1] and updated our results table accordingly (see below). * **Unconvincing improvements compared to splines.** We acknowledge the concern regarding the magnitude of performance improvements. However, we would like to emphasize that GAMs, constrained to be sums of univariate functions, are inherently limited in their expressive power. Consequently, even an oracle, i.e., a globally optimal GAM, may yield only modest gains in predictive performance. Expecting substantial improvements over existing GAM variants may therefore be unrealistic given the inherent limitations of the model class. On regression datasets, spline-based methods consistently underperform compared to stump- or tree-based approaches. As noted by Reviewer wGmH, a key advantage of piecewise constant GAMs over smooth methods like cubic splines is their ability to capture discontinuous shape functions and abrupt changes, which is an important characteristic for modeling tabular data. 
Another limitation of traditional splines lies in selecting the placement and number of knots (i.e., points where two spline segments meet). For example, PyGAM places the same number of knots for each feature, whereas in our method, the distribution of stumps is determined dynamically during optimization. * **Suboptimal solutions of alternating optimization.** To clarify, our algorithm performs *exact* optimization over the four parameters of one stump at a time: the feature index, threshold value, and the left and right leaf predictions. After a full pass through all stumps, all leaf predictions are globally optimized given the current split parameters. While the method may appear simple and may be perceived as lacking sophistication, it is in fact highly effective at minimizing error. For instance, as shown in the left plot of Figure 1, just 200 optimized stumps achieve the same error level as 1000 greedily added stumps. * **No theoretical claims or guarantees on solution quality.** As we elaborate in our rebuttal to reviewer wGmH, our algorithm has important theoretical properties: monotonic decrease of the objective at each iteration and convergence to a local optimum. * **Absence of strong baseline comparisons.** Our method operates within the class of GAMs, and as such our experimental comparisons include all the state-of-the-art methods (e.g. EBMs) within this model class. Other reviewers have acknowledged that the chosen baselines are appropriate and sufficient for evaluating our contributions within this context. In this rebuttal, we have also included an additional comparison with FastSparse. * **Additional comparisons.** We have run the methods from [1, 2, 3], and reported the results in Tables 1–4 of the rebuttal PDF: https://anonymous.4open.science/r/pdf_to_anon-1CD8/icml25_rebuttal_figures.pdf or https://drive.google.com/file/d/1Xso8hwFd9rB_XqAzH6H8ri7vUbrXEaq6/view?usp=sharing. Full experimental details are provided in the table captions. 
[1] generally delivers strong performance across many datasets while maintaining small model sizes and fast training times. We can also leverage the solution from [1] to warm-start our stump forests and achieve even more accurate results (see more in our rebuttal to owkV). In contrast, [2] and [3] were significantly slower to train; we had to apply subsampling and feature binarization to obtain usable results. While these tree-based models are interpretable, direct comparisons with them are outside the scope of our work, which focuses specifically on GAMs. * **Writing improvement.** Thank you for the helpful suggestions regarding the writing. We will carefully incorporate these recommendations to improve clarity, conciseness, and overall readability in the final version of the paper. * **Selection of hyperparameters.** As we state in lines 555 to 563 of the appendix, we perform grid search over $\lambda = \{2.0, 4.0, 6.0\}$ for classification datasets and $\lambda = \{20.0, 40.0, 60.0\}$ for regression datasets. We use a fixed value of $\alpha = 0.1$, as it plays a less significant role. --- Rebuttal Comment 1.1: Comment: Thanks for the reply. I appreciate your running the additional experiments. I have several follow-up comments/questions. 1. When you optimize each individual stump, are you optimizing $\mu_l$ and $\mu_r$ as well or just optimizing over $\phi$ and $\tau$? If you are also optimizing over $\mu_l$ and $\mu_r$, how do you do that? 2. Thanks for running the FastSparse baseline. However, I find it hard to believe the reported results. Are you sure you are running the baseline correctly? If you look at Figure 8 in their AISTATS paper, the authors were able to achieve prediction accuracy ~ 0.73 with only 15-20 coefficients. But in your rebuttal results, FastSparse uses 196 coefficients. The FastSparse paper also shows a similar level of extreme sparsity it can achieve on GAMs for two other practical datasets.
To me, FastSparse is either used or reported incorrectly in the rebuttal results. Yes, I know my criticism is much harsher than that of the other two reviewers, but I work a lot on $\ell_0$ optimization, and I am kind of sure $\ell_0$ optimization can produce much sparser models than $\ell_1$ optimization or the results shown in this paper. Thus, I find it hard to reconcile this sentence "Our regularized stump forests achieve accuracy comparable to state-of-the-art GAM methods while using fewer parameters" stated in the abstract. The reason is that there are already $\ell_0$ optimization methods that can achieve much sparser GAM models. If you really want to claim you can achieve much sparser models, I recommend you do some plotting similar to Figure 5 and Figure 8 shown in the FastSparse AISTATS paper. Specifically, plot accuracy (or AUC or squared error) vs sparsity level on both the training and test sets. 3. For the two decision tree baselines, when you report "out of running time", do these baselines not even return any tree at all? Usually for methods based on branch-and-bound (BnB), if they exceed the running time and optimality is not certified, many BnBs just return the best heuristic solution found so far. So in principle you can at least use the best heuristic model to do evaluation on the training and test sets. --- Reply to Comment 1.1.1: Comment: * We enumerate all possible split candidates $(\phi, \tau)$. For each potential split, the optimization over $\mu_l$ and $\mu_r$ is strongly convex and can be solved exactly. In the regression setting (we show the case where $\alpha = 0$ due to space constraints), the problem is: $\min_{\mu_l, \mu_r} \frac{1}{2} \sum_{n \in l} (y_n - \mu_l)^2 + \frac{1}{2} \sum_{n \in r} (y_n - \mu_r)^2 + \lambda \ |\mu_l - \mu_r|$. Let $\bar{y}_l$ and $\bar{y}_r$ denote the sample means of the left and right partitions, and let $n_l$ and $n_r$ be the number of points in each.
The optimal values of $\mu_l, \mu_r$ admit a closed-form solution: $$ \mu_l = \bar{y}_l - \frac{\lambda}{n_l}, \ \mu_r = \bar{y}_r + \frac{\lambda}{n_r} \quad \text{if } \bar{y}_l - \bar{y}_r > \lambda (\frac{1}{n_l} + \frac{1}{n_r}) $$ $$ \mu_l = \bar{y}_l + \frac{\lambda}{n_l}, \ \mu_r = \bar{y}_r - \frac{\lambda}{n_r} \quad \text{if } \bar{y}_l - \bar{y}_r < -\lambda (\frac{1}{n_l} + \frac{1}{n_r}) $$ $$ \mu_l = \mu_r = \frac{n_l \bar{y}_l + n_r \bar{y}_r}{n_l + n_r} \quad \text{otherwise}. $$ By maintaining cumulative statistics when scanning over split points, we can evaluate the optimal $\mu_l, \mu_r$ and the corresponding objective for each split in constant time. This gives an overall linear-time procedure, assuming the features are pre-sorted. For classification with cross-entropy loss, the optimal values of $\mu_l, \mu_r$ do not admit closed-form solutions. Instead, we approximate them using a single Newton step, an established technique used in XGBoost and in the iteratively reweighted least squares (IRLS) method for traditional GAM backfitting. Given that our framework revisits stumps at each iteration and jointly optimizes all leaves, these Newton updates are both computationally efficient and well-aligned with the structure of our iterative optimization procedure. * In our experiments, the reported model size for piecewise constant GAMs is defined as the number of constant segments *multiplied by 2*, to account for both the split threshold and the constant value. For all baselines, we follow the same protocol: hyperparameters are selected using grid search on a validation set, and final performance is reported as the average over five independent train/test splits using the best-found parameters. For FastSparse we use the built-in num_lambda=100 setting to search for the optimal value of $\lambda$ during hyperparameter tuning.
One implementation detail is that the FastSparse code does not provide functionality for learning a feature binarization scheme from the training data and applying it to the test data. Based on their paper and available code, it seems that feature binarization may have been performed on the entire dataset prior to model training, which can lead to information leakage from test to training data. In contrast, we ensure that binarization thresholds are obtained strictly from the training set and then applied to the test set. We observe inconsistency in FastSparse’s behavior depending on whether a given $\lambda$ model is part of a regularization path. For example, training a single model with $\lambda = 0.63$ (by specifying the lambda_grid argument with a single value) results in 102 nnz coefficients, while the same $\lambda$ within a regularization path produces 71 nnz coefficients, despite all other parameters being equal. This sensitivity does not seem to be documented in the FastSparse paper or codebase. Because FastSparse seems to favor the use of regularization paths, we compare its full path to that of our method (see below). We fully agree that $\ell_0$-based optimization is a more effective approach for obtaining sparse models compared to $\ell_1$ regularization. This is, in fact, **a key reason why our method produces compact models: we explicitly limit the number of stumps from the outset, effectively imposing an $\ell_0$-type constraint**, and then optimize over this restricted stump budget. The $\ell_1$-roughness penalty in our approach is primarily intended to control overfitting. We do not set its value high enough to enforce sparsity; rather, we rely on it for its shrinkage effect. We perform experiments comparing full regularization paths, and produce the suggested figs. 1 and 2 in <https://anonymous.4open.science/r/pdf_to_anon-1CD8/icml25_rebuttal_figures_2.pdf> or <https://drive.google.com/file/d/1WZpBhmgvDD0QYV5-X-1aKuchCgRa_TdK/view?usp=sharing>. 
The figure captions provide descriptions of the experiments. On the FICO dataset, our method performs comparably to FastSparse. On the cpuact dataset, our approach achieves higher accuracy, which we attribute to the regularizing effect of the $\ell_1$ roughness penalty. * When the time limit is reached, the GOSDT implementation terminates with an exception: "Error: GOSDT encountered an error while training". The OSRT implementation produces some models if it exceeds the time limit, and we have updated our results table with it.
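The closed-form leaf update derived in the reply above can be written out and sanity-checked numerically. This is a hedged sketch with assumed function names (regression case with $\alpha = 0$), not the authors' code:

```python
import numpy as np

def stump_leaves_closed_form(y_left, y_right, lam):
    """Exact minimizer of
        0.5*sum_l (y - mu_l)^2 + 0.5*sum_r (y - mu_r)^2 + lam*|mu_l - mu_r|
    following the case analysis in the rebuttal."""
    nl, nr = len(y_left), len(y_right)
    yl, yr = float(np.mean(y_left)), float(np.mean(y_right))
    gap = lam * (1.0 / nl + 1.0 / nr)
    if yl - yr > gap:   # leaves stay distinct, shrunk toward each other
        return yl - lam / nl, yr + lam / nr
    if yl - yr < -gap:  # symmetric case
        return yl + lam / nl, yr - lam / nr
    pooled = (nl * yl + nr * yr) / (nl + nr)  # penalty fuses both leaves
    return pooled, pooled
```

Because the objective is convex, checking that no small perturbation of the returned leaves improves it certifies global optimality of the closed form.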
Summary: The paper introduces a method to fit decision tree stumps per feature, which can be interpreted as a generalized additive model. To mitigate overfitting, the authors propose a smoothness constraint that is optimized jointly with the stump parameters. Unlike approaches such as EBMs that greedily add decision tree stumps, this method requires fewer stumps because it globally optimizes stump placement according to the data. Consequently, the method avoids the need for ensemble strategies like bagging and boosting. The experiments demonstrate comparable performance to Explainable Boosting Machines and other baselines, but using a more straightforward fitting procedure. Claims And Evidence: Yes, all claims of the paper are well supported. Methods And Evaluation Criteria: Yes, all considered benchmarks are classic benchmarks to consider in the GAM literature. Theoretical Claims: - Experimental Designs Or Analyses: The experimental design seems solid. The hyperparameters of each baseline are optimized using a grid search over several hyperparameters. ORSF performs well throughout the experiments. Supplementary Material: I have read the supplementary material which discusses the hyperparameters, baselines and datasets used. Relation To Broader Scientific Literature: This paper proposes a new method of fitting decision tree stumps, which are not a novel model class for GAMs, as they are also used prominently in EBMs. However, the fitting procedure discussed by the authors is novel as far as I know. It's intuitive as it globally optimizes the loss function. Essential References Not Discussed: I am not aware of any missed references. Other Strengths And Weaknesses: ### Positive: - The framing of learning the decision tree stumps using a global optimization approach rather than a greedy method is well motivated. The problem with overfitting is illustrated well and the proposed regularizer is intuitive to understand.
- Unlike Neural Additive Models, the proposed approach can easily model discontinuities in the shape functions, like EBMs, which NAMs struggle to do. I think the authors could emphasize this even more because discontinuities can often occur in tabular data and the GAM should be able to represent this. - The paper is well written and easy to understand. ### Weaknesses: - Could you provide a theoretical justification or discussion about whether the proposed alternating optimization approach can reliably reach a global optimum? - Could you also show the shape functions of the other methods for comparison? - You might consider incorporating pairwise interactions by applying the same strategy used by EBMs to identify dominant pairwise effects (Lou et al., 2013). Using similar pairwise effects would allow an additional comparison point with EBMs. ### References: - Lou, Y., Caruana, R., Gehrke, J., & Hooker, G. (2013, August). Accurate intelligible models with pairwise interactions. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 623-631). Other Comments Or Suggestions: - Questions For Authors: My understanding is that your method implicitly places more stumps in regions with higher data density, which is typically the case. However, Figure 5 (feature "year") shows a surprisingly uniform distribution of thresholds. Could you clarify why this occurs? ## update after rebuttal - The authors addressed all of my questions, and looking at the other two reviews I have no further questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for providing us with a very valuable and insightful review! We greatly appreciate your effort in reviewing our paper. * **Modeling discontinuities.** You are absolutely right that NAMs (as well as traditional cubic splines) are limited in their ability to capture sharp discontinuities in shape functions. This is a significant limitation in many real-world tabular datasets, where such discontinuities are common due to thresholds, categorical splits, or other sharp nonlinearities in the data. One of the key advantages of the stump-based shape functions, similar to those used in EBMs, is their natural ability to represent abrupt changes locally in the function without inducing unnecessary smoothness. This is also a likely reason why NAMs tend to underperform in our experiments on tabular datasets. We really appreciate your pointing this out; we will incorporate this discussion more explicitly in the introduction to better highlight this advantage of our method. * **Reaching a global optimum.** Although we do not formally prove this result, and cannot easily find a definitive reference establishing it, the problem of learning an optimal decision stump forest appears to be computationally intractable in general. Intuitively, it resembles the well-known best subset selection problem, which is NP-hard. Each decision stump is specified by a feature index and a threshold. However, selecting the best set of $T$ stumps entails choosing an optimal subset of $T$ feature-threshold pairs from a combinatorially large candidate space. For a dataset with $N$ samples and $D$ features, there are $O(ND)$ such candidates, leading to a total search space of size $\binom{ND}{T}$. This exponential growth in possibilities suggests that the overall problem may be NP-hard, implying that finding a globally optimal solution in polynomial time is unlikely in the general case.
* **Theoretical properties.** Nevertheless, our proposed alternating optimization algorithm has important theoretical properties. Specifically, each optimization step is guaranteed to monotonically decrease the overall objective, and the algorithm converges to a local optimum where no single alternating optimization step can further improve the solution (quite similar to the k-means clustering algorithm). The traditional ensembling methods of boosting and bagging do not provide these types of theoretical guarantees, and lack a well-defined global objective, unlike our more principled optimization framework. The ability of our approach to take an initial stump forest and optimize it further has the additional advantage of warm-starting behavior: the model can be initialized from an existing forest, such as one derived from any piecewise constant GAM (more on this in our rebuttal to reviewer owkV), and further refined. This also allows for efficient model updates when new training data becomes available (more on this in our rebuttal to reviewer owkV).
* **Shape functions of the other methods.** In https://anonymous.4open.science/r/pdf_to_anon-1CD8/icml25_rebuttal_figures.pdf or https://drive.google.com/file/d/1Xso8hwFd9rB_XqAzH6H8ri7vUbrXEaq6/view?usp=sharing, in figs. 2 and 3, we provide the shape functions for EBM and PyGAM. Overall, the general trend in the EBM shapes is quite similar to the ones obtained by our method, with similar accuracy (around 2,200 test RMSE), although EBM generated an overly noisy curve for the mileage feature. PyGAM, on the other hand, generates very smooth curves, and behaves quite unpredictably in regions with less or no data.
* **Pairwise interactions.** (Lou et al., 2013) uses a two-step approach for building GA$^2$M: first, building univariate shapes, then modeling and including pairwise interactions on the residuals using a variation of depth-2 decision trees.
Indeed, we can directly apply a similar strategy, and extend our method to model pairwise interactions. But we believe that it is possible to extend the proposed alternating optimization method to learn an ensemble of both decision stumps and depth-2 trees. A naive implementation of this idea could result in overfitting, and the regularization terms of our current approach are not directly applicable to depth-2 trees. Defining effective regularization techniques for these more complex tree ensembles is an important future research direction.

* **Distribution of thresholds on the "year" feature.** The "year" feature is highly discretized, with only 22 unique values ranging from 1999 to 2020. In the higher-density range (2013–2020), nearly all possible thresholds are utilized. In contrast, some thresholds are skipped in lower-density regions; for example, 2005 and 2008 are not used. While decision stumps may appear to concentrate in regions with more data, their distribution is ultimately determined by the optimization process rather than simply by feature density.
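As a footnote to the **Theoretical properties** point above, the monotone-descent behavior is easy to demonstrate on a toy example. The sketch below is a simplified stand-in for our actual algorithm (squared loss, exhaustive single-stump refits): each step exactly refits one stump against the residual of the others, so the training objective can never increase.

```python
import numpy as np

def fit_stump(x, r):
    # Exhaustively fit one stump  a*1[x <= t] + b*1[x > t]  to residual r
    # under squared loss; leaf values are residual means, so this is the
    # exact minimizer over all thresholds observed in the data.
    best_sse, best = np.inf, None
    for t in np.unique(x):
        left, right = r[x <= t], r[x > t]
        a = left.mean() if left.size else 0.0
        b = right.mean() if right.size else 0.0
        sse = ((r - np.where(x <= t, a, b)) ** 2).sum()
        if sse < best_sse:
            best_sse, best = sse, (t, a, b)
    return best

def alternating_fit(x, y, T=5, sweeps=3):
    # Cyclically refit each of T stumps against the residual of the others.
    # Each refit is an exact minimization, so the loss never increases.
    stumps = [(x.min(), 0.0, 0.0)] * T
    predict = lambda: sum(np.where(x <= t, a, b) for t, a, b in stumps)
    losses = [((y - predict()) ** 2).sum()]
    for _ in range(sweeps):
        for i in range(T):
            t, a, b = stumps[i]
            residual = y - (predict() - np.where(x <= t, a, b))
            stumps[i] = fit_stump(x, residual)
            losses.append(((y - predict()) ** 2).sum())
    return stumps, losses

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = (x > 0.5).astype(float) + 0.1 * rng.normal(size=200)
_, losses = alternating_fit(x, y)
assert all(b <= a + 1e-9 for a, b in zip(losses, losses[1:]))  # monotone descent
```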
Efficient LiDAR Reflectance Compression via Scanning Serialization
Accept (poster)
Summary: This paper proposes a reflectance compression method based on serialized LiDAR data. Specifically, the method first converts 3D LiDAR point clouds into 1D sequences through scanning order serialization, where each point is labeled with a context representation that includes the sensor scan index, radial distance, and previous reflectance values, thereby establishing dependencies within the sequence. To achieve efficient sequence modeling, the paper combines Mamba with a two-level parallelization scheme to accelerate the autoregressive processing speed. Experimental results show that this method outperforms the latest G-PCC and deep learning-based Unicorn in terms of model size, encoding/decoding time, and compression performance. Additionally, the authors propose a pipelined compression strategy, enabling the encoding speed to reach thirty frames per second, which has practical significance for real-world applications.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: There are no apparent errors in the theoretical proof.

Experimental Designs Or Analyses: There are no significant issues with the experimental design and analysis.

Supplementary Material: I have reviewed the supplementary materials regarding the ablation experiments and some explanations of the experimental details.

Relation To Broader Scientific Literature: I believe the main contribution of this paper is the proposal that the reflectance in different sequences of LiDAR may not be correlated. The authors have demonstrated this point through ablation experiments, which provides guidance for the parallelization of future LiDAR reflectance compression.

Essential References Not Discussed: None

Other Strengths And Weaknesses: Strengths:
(1) The paper points out that the correlation of reflectance between different sequences may not be high, allowing for the parallel processing of point clouds from different sequences.
This is a significant finding, as current research in both image compression and point cloud attribute compression has focused heavily on the similarity of local information (e.g., studies based on context-based entropy models). The discovery of the unique distribution pattern of LiDAR reflectance in point clouds provides substantial guidance for future research, particularly in real-time data encoding and decoding.
(2) The paper demonstrates potential for real-time encoding and decoding. The method employs parallelization between different sequences and within multiple windows of a single sequence, referencing the pipelining concept from computer architecture and transforming the encoding process into a three-stage pipeline, achieving a decoding speed of thirty frames per second.
(3) The paper includes comprehensive ablation experiments: (a) it compares the decoding time, model size, and runtime memory usage of the core module Mamba with that of Transformer; (b) the ablation experiments regarding the correlation of reflectance between different sequences are particularly convincing, as they visually illustrate the differences between sequences and experimentally demonstrate that referencing more sequences does not lead to performance improvements.
(4) The method proposed in this paper is specifically designed for LiDAR point clouds. Unlike the L3C2 method, which requires specific sensor parameter information, this method is more applicable to various datasets, needing only the most basic geometric information (xyz) to function.

Overall, the proposed method exhibits excellent compression performance and encoding/decoding speed. The writing of the paper is also quite reasonable, supported by sufficient experimental evidence to substantiate the authors' claims.

Weaknesses:
(1) The paper claims that reflectance is significant for downstream detection tasks. The experimental results indicate that the accuracy for bicycles and pedestrians drops to less than half.
I would like to know whether the authors retrained the PointPillar model under the same settings without considering reflectance during this experiment. Additionally, I hope to see more comprehensive comparative results, such as detection accuracies for cars, pedestrians, and cyclists at different AP levels. Since the impact of reflectance on downstream tasks is one of the core points of this paper, I would like this section of the comparative experiments to be clearer and more thorough.
(2) In the process of data serialization, there appears to be an issue with the formula the authors provide in Eq. (4). When quantizing based on the elevation angle, due to noise during the sampling of LiDAR point clouds, it is inevitable that some different points will be mapped to the same (u, v) coordinates during the actual conversion process, which undoubtedly leads to a loss of precision. However, the paper seems to claim lossless reflectance compression (as the authors do not provide RD curves in the subsequent results, only bpp and bpp-gain). I am concerned about the correctness of this part when compared to other results and hope the authors can provide a detailed explanation regarding this issue.
(3) The paper does not conduct experiments at multiple bitrate points, which raises some confusion. Other methods, such as G-PCC and Unicorn, provide results at multiple bitrate points. When comparing with these methods, did the authors set them all to lossless mode? Similarly, why did the authors not conduct tests at multiple bitrate points? Even with lossless compression, different quantizations of attributes can achieve various bitrate points.

Other Comments Or Suggestions: None

Questions For Authors: Please see the weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate your insightful comments and constructive suggestions. Thank you for recognizing this paper *provides substantial guidance for future research* with *reasonable writing* and *sufficient experimental evidence*. Below, we provide detailed responses in the hope of addressing your concerns.

**1. Clearer Comparative Experiments for Downstream Detection Task**

1) The visualizations and quantitative results in our manuscript show that simply removing reflectance data (setting values to 0) in pre-trained models leads to significant performance degradation. As suggested, expanded comparisons now include SECOND and PointRCNN (see table below), showing detection accuracies for cars/pedestrians/cyclists at multiple AP levels ("w/o R" = reflectance set to 0).

||Car Easy|Car Mod.|Car Hard|Ped. Easy|Ped. Mod.|Ped. Hard|Cyc. Easy|Cyc. Mod.|Cyc. Hard|**mAP Mod.**|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|PointPillar |87.75| 78.40| 75.18| 57.30| 51.41| 46.87| 81.57| 62.81| 58.83| **64.21**|
|PointPillar (w/o R) |83.85| 74.21| 70.36| 21.56| 14.08| 12.83| 47.79| 34.28| 32.13| **40.86**|
|SECOND |90.55| 81.61| 78.61| 55.95| 51.15| 46.17| 82.97| 66.74| 62.78| **66.50**|
|SECOND (w/o R) |87.87| 78.92| 75.58| 40.46| 35.66| 31.88| 70.42| 50.64| 47.70| **55.07**|
|PointRCNN |91.47| 80.54| 78.05| 62.96| 55.04| 48.56| 89.17| 70.89| 65.64| **68.82**|
|PointRCNN (w/o R)|88.03| 77.00| 72.85| 42.90| 35.28| 30.68| 56.58| 42.51| 40.07| **51.60**|

2) We retrained the detection models from scratch without reflectance. The table below shows the performance of reflectance-ablated (*) models, where performance degradation is reduced but persists (and is non-trivial).

||Car Easy|Car Mod.|Car Hard|Ped. Easy|Ped. Mod.|Ped. Hard|Cyc. Easy|Cyc. Mod.|Cyc. Hard|**mAP Mod.**|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|PointPillar* | 87.86| 78.39| 75.71| 59.28| 52.48| 48.04| 76.18| 52.96| 50.17| **61.28**|
|SECOND* | 88.70| 79.47| 77.86| 57.52| 52.79| 48.38| 78.72| 56.92| 53.26| **63.06**|
|PointRCNN* | 89.46| 79.84| 77.58| 64.94| 54.62| 47.66| 89.16| 65.54| 60.92| **66.67**|

3) To more adequately reflect the role of reflectance, we revised our original wording from "It is crucial in downstream tasks..." to "It is widely used in downstream tasks...". All results will be detailed in the revised manuscript and supplementary. Finally, we respectfully emphasize that the core contribution of this work lies in the development of a high-performance real-time reflectance compressor. The extensive utilization of reflectance in downstream tasks provides substantial validation for the significance of our work.

**2. Lossless Reflectance Compression vs. Quantization-Induced Precision Loss**

We respectfully clarify that all comparisons were conducted in lossless mode, consistent with other lossless compression works (such as G-PCC and Unicorn). Although distinct coordinates may project to identical (u,v) pairs, these pairs only serve as auxiliary priors. The original 3D coordinates are preserved for each point through decoding the geometry bitstream (which aligns with G-PCC's separate lossless geometry/attribute pipeline). Therefore, even though points may be mapped to the same (u,v) pair, they remain distinct points in 3D space, and we can clearly differentiate them based on their geometry coordinates. Below we show an example.

Consider two points in a point cloud, $p_1$=(3.12, 8.65, 0.50, 80) and $p_2$=(3.13, 8.63, 0.50, 82), where each point is represented as ($x$, $y$, $z$, $reflectance$). After coordinate transformation (in Eq. 3) and quantization (in Eq. 4), both points are mapped to the same (u,v) pair (712, 58). However, this spatial projection does not compromise the integrity of reflectance values.
Specifically:
- The encoder utilizes statistical patterns from the (712, 58) pair to optimize entropy coding for both values (80 and 82).
- The decoder reconstructs reflectance values (80 and 82) by resolving the entropy-coded symbols using the same (u,v)-derived priors and the lossless geometry data.

In short, SerLiC uses (u,v) as contextual priors for entropy coding while retaining raw reflectances. Please refer to the Python-style serialization code in our response to reviewer #Kwag for more details.

**3. Additional Explanation on Bit Rate Points and Lossy Compression**

Although different quantizations can adjust bitrates in lossless models, this approach differs fundamentally from lossy compression. We tested the quantization-driven lossy compression (suggested by the reviewer) against G-PCC, with RD curves available in [this anonymous website](https://anonymous.4open.science/r/11079-88B1/readme.md). It is observed that although SerLiC is designed for lossless compression, it remarkably outperforms lossy methods in the high bitrate range. Moreover, SerLiC operates in real-time, much faster than these lossy methods. Extending SerLiC to lossy compression is our future work.
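For completeness, the (u,v) collision scenario from point 2 above can be sketched in a few lines. Note that the quantizer below is a generic spherical projection written purely for illustration; it is not the exact Eq. 3-4 of our manuscript, so its bin indices need not match the paper's (712, 58).

```python
import math

def uv_index(x, y, z, L=64, W=1024, pitch_up=3.0, pitch_down=-24.0):
    # Generic spherical quantization (an illustrative stand-in for Eq. 3-4):
    # the azimuth angle is binned into u, the elevation angle into v.
    rho = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)                    # in [-pi, pi]
    elevation = math.degrees(math.asin(z / rho))  # in degrees
    u = int((azimuth + math.pi) / (2 * math.pi) * W) % W
    v = min(L - 1, max(0, int((elevation - pitch_down) / (pitch_up - pitch_down) * L)))
    return u, v

# Two nearby points with distinct reflectance values (80 vs. 82).
p1 = (3.12, 8.65, 0.50, 80)
p2 = (3.13, 8.63, 0.50, 82)
uv1, uv2 = uv_index(*p1[:3]), uv_index(*p2[:3])
assert uv1 == uv2      # the points collide in (u, v) ...
assert p1[3] != p2[3]  # ... but their raw reflectances stay distinct and are
                       # entropy-coded losslessly, with (u, v) only as context.
```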
Summary: The paper presents SerLiC, a novel serialization-based neural compression framework specifically designed for LiDAR reflectance data. The main contributions and findings of the study include: 1. Serialization of LiDAR Data: SerLiC transforms 3D LiDAR point clouds into 1D sequences through scan-order serialization, aligning with the LiDAR scanning mechanism. This approach allows for more efficient modeling and analysis of reflectance attributes. 2. Contextual Tokenization: Each LiDAR point is tokenized into a contextual representation that incorporates its sensor scanning index, radial distance, and prior reflectance values. This enhances the framework's ability to explore dependencies effectively. 3. Efficient Sequential Modeling: The framework utilizes the Mamba model, which operates with a dual parallelization scheme, facilitating simultaneous autoregressive dependency capture while ensuring fast processing. 4. Performance Improvements: SerLiC achieves over 2× volume reduction compared to the original reflectance data, with up to a 22% reduction in compressed bits relative to existing state-of-the-art methods like Unicorn. It operates with only 2% of the parameters used in these methods, making it highly efficient. 5. Real-world Applicability: A lightweight version of SerLiC, with 111K parameters, achieves processing speeds of ≥ 10 frames per second, making it practical for real-world applications such as autonomous driving and urban planning. 6. Experimental Results: Extensive tests demonstrate that SerLiC consistently outperforms existing compression methods (including G-PCC and Unicorn) across widely used datasets like KITTI, Ford, and nuScenes, reinforcing its effectiveness for LiDAR reflectance compression. Overall, the paper highlights the need for specialized codecs tailored to LiDAR technology and presents SerLiC as a robust solution to meet this requirement. 
Claims And Evidence: (1) The paper introduces serialization to leverage the sequential nature of LiDAR scans, enabling more efficient modeling and processing. This approach is both logical and well-motivated. (2) The paper thoroughly explores the intrinsic characteristics of LiDAR reflectance. Experiments demonstrate that removing reflectance leads to a significant decline in pedestrian and cyclist detection accuracy when using the widely adopted PointPillar model (Lang et al., 2019). Specifically, on the KITTI dataset, the average precision (AP) drops sharply from 51.4 (62.8) to 14.1 (34.3) for pedestrians (and cyclists), as shown in Figure 1, rendering it impractical for real-world applications.

Methods And Evaluation Criteria: The proposed method is both logical and well-motivated. (1) Specifically, it introduces a lossless compression technique tailored for LiDAR point cloud reflectance data. Unlike previous approaches, which predominantly concentrate on general-purpose attributes such as color intensities or spatial coordinates, this method addresses a critical gap by focusing on the unique properties of LiDAR reflectance. General-purpose compression techniques, while versatile, often fail to fully exploit the specialized characteristics of reflectance data, such as its intensity distribution and correlation with surface materials. This limitation reduces their effectiveness in scenarios where reflectance plays a pivotal role, such as in autonomous driving, robotics, or environmental mapping. By prioritizing reflectance preservation without data loss, the proposed method offers a more efficient and targeted solution, enhancing the utility of LiDAR point clouds in these specialized applications. The evaluation criteria are both reasonable and well-designed. (1) The testing conditions adhere strictly to the Common Test Conditions (CTC) outlined by the MPEG AI-based Point Cloud Compression (AI-PCC) framework.
This standardized approach ensures consistency and comparability with existing benchmarks in the field of point cloud compression. The CTC provides a rigorous set of guidelines, including predefined datasets, compression ratios, and performance metrics such as bitrate and reconstruction quality, allowing for an objective assessment of the method’s effectiveness. By aligning with these conditions, the evaluation not only validates the method’s performance under controlled and reproducible settings but also demonstrates its potential applicability within the broader context of AI-driven point cloud processing standards. This adherence strengthens the credibility of the results and facilitates future comparisons with other state-of-the-art techniques. Theoretical Claims: There isn’t too much theoretical proof. Experimental Designs Or Analyses: Yes, ablation experiment. The ablation studies in the paper are well-designed, systematically evaluating the impact of contextual construction, window-based parallelization, the Mamba network, and attention mechanisms on the performance and computational complexity of the SerLiC model, thereby thoroughly validating the effectiveness of each component. In the contextual construction experiments, by individually disabling components such as scanning index, radial distance, and prior reflectance, the study clearly demonstrates their critical role in capturing LiDAR reflectance correlations. For instance, a noticeable performance drop occurs when the scanning index and radial distance are disabled, aligning with the physical characteristics of LiDAR scanning. The window-based parallelization experiments, by adjusting window sizes (e.g., from 64 to 1024), reveal a trade-off between performance and computational resources (e.g., encoding time and memory usage), identifying 128 as the optimal balance point, which reflects consideration for real-world deployment needs. 
The Mamba network experiments, varying the number of layers (1 to 7) and network dimensions (64 to 512), show that increased model capacity enhances performance but also raises complexity, a finding consistent with expectations and providing a basis for resource optimization. The attention mechanism experiments, by replacing the Mamba module, compare the performance and complexity of both architectures, proving that Mamba significantly reduces computational overhead (e.g., decoding time drops from 5.12 seconds to 0.23 seconds at a window size of 128) while maintaining comparable performance. Overall, these ablation studies comprehensively cover the model’s core components, employing a controlled variable approach to deliver clear causal insights. They not only confirm the rationality of the SerLiC design but also offer valuable guidance for practical applications in LiDAR reflectance compression. Future work could further explore cross-dataset generalization and the scalability of lossy compression to enhance the method’s applicability. Supplementary Material: No. Relation To Broader Scientific Literature: The paper "Efficient LiDAR Reflectance Compression via Scanning Serialization" introduces SerLiC, advancing LiDAR point cloud compression by focusing on reflectance data, an underexplored area in prior work. Its key contributions—serializing point clouds by scanning order, using Transformer-based sequence modeling, implementing window-based parallelization, and integrating the Mamba network—build on and extend the scientific literature. Traditional methods, like those in Zhu et al. (2018) and de Queiroz and Chou (2016), prioritize spatial coordinates with general-purpose encoding, often neglecting reflectance-specific traits. SerLiC’s serialization leverages LiDAR’s sequential nature, akin to raster scans in image processing, enabling novel sequence modeling previously underutilized in this domain. 
The Transformer approach adapts established NLP techniques (e.g., Vaswani et al., 2017) to LiDAR data, while window-based parallelization ensures practical efficiency, a common deep learning practice. The Mamba network, a recent innovation (Gu & Dao, 2023), offers linear complexity, potentially marking its debut in LiDAR compression and outperforming prior methods like Unicorn (Wang et al., 2025) by achieving over 2x volume reduction and 22% bit rate improvement with minimal parameters. These advancements enhance compression efficiency and real-time applicability, contributing significantly to autonomous driving and 3D mapping research. Future work could explore broader generalization and lossy compression extensions. Essential References Not Discussed: No. Other Strengths And Weaknesses: The concept of introducing serialization to leverage the sequential nature of LiDAR scans is insightful, enabling efficient modeling and processing. By transforming 3D LiDAR data into 1D sequences, it exploits inherent correlations, proving practical for real-world applications like autonomous driving. The writing is clear and accessible, effectively conveying complex ideas with concise explanations and figures, making it easy for readers, including non-specialists, to understand the methodology and its significance in LiDAR compression. Other Comments Or Suggestions: Discuss Limitations: The paper should provide a deeper analysis of SerLiC’s limitations to enhance its credibility. For instance, its adaptability to non-rotational LiDAR systems, such as solid-state sensors with different scanning patterns, remains unaddressed. Additionally, the method’s performance under highly variable point cloud densities—common in complex environments like dense urban areas or sparse rural settings—needs exploration. These factors could affect serialization efficacy and compression quality. 
Discussing such constraints would clarify the method’s practical scope, guide future improvements, and help readers assess its applicability across diverse real-world scenarios. Questions For Authors: 【For Serialization Process:】 Could the authors provide a more detailed description or pseudocode for the serialization process to clarify how the LiDAR scanning order is transformed into a one-dimensional sequence? A step-by-step explanation or pseudocode would help readers understand the exact mechanism behind converting the inherently spatial and temporal LiDAR scanning order into a 1D sequence, making the process more transparent and reproducible. How does the serialization process specifically preserve the spatio-temporal correlations of LiDAR data? Please illustrate with a concrete example. LiDAR data contains critical spatial and temporal relationships, and it’s unclear how these are maintained during serialization. A specific example—perhaps showing how a sequence of LiDAR points from a moving object retains its spatial continuity and temporal order—would make this preservation mechanism more tangible and convincing. 【For Comparison of Mamba Network and Transformer:】 Could the authors explain why the Mamba network is better suited for processing LiDAR reflectance data compared to Transformers? Please specify which characteristics of LiDAR data make Mamba more advantageous. While both architectures are designed for sequence modeling, the authors should clarify why Mamba outperforms Transformers in this context. For instance, is it due to LiDAR data’s long-range dependencies, high sparsity, or variable sequence lengths? Highlighting these data-specific traits would strengthen the argument for choosing Mamba. Are there experimental results or theoretical foundations that support the superiority of Mamba in long-sequence modeling? Please provide relevant data or literature references. Claims of Mamba’s advantages need substantiation. 
Including experimental evidence (e.g., performance metrics like accuracy or efficiency on LiDAR datasets) or citing theoretical studies (e.g., prior work on Mamba’s efficiency in long-sequence tasks) would bolster the credibility of this comparison and provide a solid foundation for the authors’ conclusions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **1. Deeper Analysis of SerLiC’s Limitations**

We appreciate your insightful observations and are pleased to provide a comprehensive response below.

1.1 Adaptability to Non-Rotational LiDAR Systems. Non-rotational LiDAR's scanning also follows a specific order, and SerLiC maintains strong compatibility. To validate this, we have conducted additional experiments using the non-rotational InnovizQC dataset from MPEG. The dataset visualization is available in [this anonymous website](https://anonymous.4open.science/r/11079-88B1/readme.md). The compression results presented below demonstrate SerLiC's robust generalizability for non-rotational LiDAR systems.

|| RAHT | Predlift | SerLiC |
|-:|:-:|:-:|:-:|
| InnovizQC 02| 3.71 | 3.10 | 2.28 |
| InnovizQC 03| 3.82 | 3.17 | 2.25 |
| **Average** | **3.77** | **3.14** | **2.27** |

1.2 Performance Under Highly Variable Point Cloud Densities. We provide detailed performance across distinct scenarios in the table below. Results show that SerLiC consistently outperforms G-PCC in all scenes. Relatively, SerLiC performs better in City and Campus scenes than in Residential and Road scenes. This occurs due to dense roadside vegetation interfering with sensor-recorded reflectance characteristics and increasing compression difficulty. We will add these results to the supplementary to demonstrate the method's applicability across diverse real-world scenarios.

|| RAHT | Predlift | Ours |
|-:|:-:|:-:|:-:|
| City | 4.67 | 4.55 | 3.24 |
| Residential | 4.96 | 4.94 | 3.81 |
| Road | 5.41 | 5.36 | 4.39 |
| Campus | 4.86 | 4.73 | 3.43 |

**2. Pseudocode for the Serialization Process**

The following Python-style code demonstrates the serialization of LiDAR point clouds. The function takes a point cloud of shape (N, 4) as input and outputs a list of point sequences, where each sequence corresponds to a specific laser emitter and represents an ordered point set scanned by that laser.
```python
def scan_order_serialization(points, L=64, W=1024, pitch_up=3.0, pitch_down=-24.0):
    """
    Params:
        points (torch.Tensor): Input point cloud of shape (N, 4),
            with columns (x, y, z, reflectance);
        L (int): Number of lasers (vertical resolution);
        W (int): Horizontal resolution;
        pitch_up (float): Maximum elevation angle;
        pitch_down (float): Minimum elevation angle;
    Returns:
        list[torch.Tensor]: List of point sequences. Each sequence
            corresponds to a laser beam, sorted by horizontal angle.
    """
    # Separate coordinates and reflectance values
    coords, refl = points[:, :3], points[:, 3:4]

    # Coordinate mapping (this step uses Eq. 3 and 4 from the
    # manuscript to calculate rho, v, and u)
    rho, v, u = coords_mapping(coords, L, W, pitch_up, pitch_down)

    # Concatenate (rho, v, u) as auxiliary data
    points = torch.cat([rho, v, u, coords, refl], axis=-1)

    # Process each laser
    seq_list = []
    for laser_idx in range(1, L + 1):
        # Mask to filter points in the current laser
        mask = (v == laser_idx).view(-1)
        seq = points[mask]
        # Sort points by horizontal index (u) to simulate spin order
        spin_order = torch.argsort(u[mask].view(-1))
        seq = seq[spin_order]
        seq_list.append(seq)
    return seq_list
```

Our serialization transforms 3D LiDAR point clouds into 1D sequences for efficient modeling. It focuses on intra-frame organization, preserving each LiDAR frame's spatial structure losslessly. Temporal order is maintained by the system’s sequential storage/transmission of frames. For example, points from a moving car in frame $t$ retain their spatial layout after serialization; their temporal continuity across frames $t$, $t+1$, etc., is ensured by the ordered bitstream arrangement, unaffected by intra-frame encoding. This decoupling guarantees both spatial fidelity and temporal coherence. We will open-source our code to ensure transparency and reproducibility for the community.

**3. Comparison of Mamba Network and Transformer**

Specifically, the autoregressive coding employed in SerLiC is similar to next-token prediction in natural language processing (NLP). Autoregressive models are widely recognized for their exceptional contextual modeling capabilities, but their practical application is constrained by their high complexity. While attention is also an alternative option for sequential modeling, our findings in Supplementary section C.2 show that Mamba exhibits significantly lower complexity (by several orders of magnitude) compared to attention-based architectures. This conclusion aligns with prior work [1]. Mamba’s linear complexity and dual-parallel strategy enable real-time processing of LiDAR reflectance compression. This is the key reason for our choice of Mamba.

[1] Albert Gu, et al. "Mamba: Linear-Time Sequence Modeling with Selective State Spaces." Arxiv, 2023.
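As a rough illustration of this linear-vs-quadratic gap, consider back-of-the-envelope per-layer operation counts (illustrative estimates only, not measurements from our experiments):

```python
def attention_flops(seq_len, dim):
    # Self-attention: QK^T and attention-times-V each cost
    # roughly seq_len^2 * dim multiply-adds.
    return 2 * seq_len * seq_len * dim

def ssm_scan_flops(seq_len, dim, state_size=16):
    # A selective state-space scan is linear in sequence length:
    # roughly seq_len * dim * state_size operations.
    return seq_len * dim * state_size

# The advantage grows linearly with sequence length (ratio = L / 8 here).
for L in (128, 1024, 8192):
    print(L, attention_flops(L, 256) // ssm_scan_flops(L, 256))
# prints: 128 16, 1024 128, 8192 1024
```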
Summary: This paper proposes an algorithm for LiDAR point compression using LiDAR reflectance and serialization. While many studies only use the point location, which is one part of what the LiDAR sensor measures, this work focuses on the necessity of the LiDAR reflectance, which may capture the surface attribute and is also an important factor for various downstream tasks, such as the 3D object detection task. Along with this concept, this paper extends to use the Mamba architecture for more efficiency and proposes the entropy coding in Section 3.4. The authors provide experimental results in terms of point cloud compression in Table 1, and this method presents a plausible compression ratio compared to the other studies.

Claims And Evidence: Overall, the authors describe the importance of the LiDAR reflectance well and relate the LiDAR reflectance to the compression process using the scan serialization. Serialization has recently been used in the 3D scene understanding task, which is also addressed in the Point Transformer v3 paper. From this perspective, the authors introduce this concept well into LiDAR point compression, so I believe that the overall claim is clear.

Methods And Evaluation Criteria: While the claim itself is clear, I wonder about the logical relation between LiDAR point compression and the Mamba architecture. As the title says, this paper is about LiDAR point compression using LiDAR reflectance and scan serialization. Here, scan serialization is not a property unique to the Mamba architecture. In my understanding, a Transformer architecture can also encapsulate a point cloud as a 1D array, which is addressed by Point Transformer v3. Accordingly, the necessity of the Mamba architecture looks redundant and irrelevant to what the authors originally addressed in the title and the beginning of the introduction section. In terms of evaluation criteria, the authors provide thorough results in Table 1.
I can clearly see the compression ratio of this method, and it achieves a promising result compared to others. In the abstract and the beginning of the introduction, the authors address the importance of LiDAR reflectance for 3D perception tasks. Moreover, the teaser figure shows a failure case when the method does not use LiDAR reflectance. However, I cannot find any quantitative experiments supporting this claim in the manuscript or the supplementary material. So, I would like to say that __some of the evaluation criteria are missing.__ Theoretical Claims: There are no theoretical claims in the paper. The authors revisit the concepts of serialization and LiDAR reflectance. I believe this paper is closer to a technical paper than to a theoretical one. __So, I wonder whether this paper deserves to be reviewed as an ICML submission.__ If this paper were submitted to another computer vision conference, it would be fine with me. But I will __not__ consider this issue in my rating and will check the feedback from the AC or PC. Experimental Designs Or Analyses: In terms of compression, the authors provide good quantitative results in Table 1. However, they do not provide results on 3D perception tasks, which would be an important factor in establishing why LiDAR reflectance is a necessary measurement. In the introduction, the authors say _"For instance, our experiments show that removing the reflectance causes a dramatic drop in pedestrian (and cyclist) detection accuracy incorporating the widely used PointPillar (Lang et al., 2019) detection model, having AP (average precision) from 51.4 (62.8) to 14.1 (34.3) on the KITTI dataset (see Fig. 1), making it impractical for use."_ Beyond this, I cannot find any additional results in the manuscript or the supplementary material. Table 6 of the manuscript only provides results using PointPillar.
Claiming the importance of LiDAR reflectance solely from the PointPillar results is not enough, in my opinion; it can mislead readers. Also, I am not quite sure why the Mamba architecture is a necessary condition for the LiDAR compression task. In my understanding, this formulation is also feasible with a Transformer architecture, such as Point Transformer v3. From this perspective, the authors should provide a comparison of Scan serialization with conventional serialization methods, but I cannot find experimental results on this issue. Supplementary Material: I went through the supplementary material and read Section E to check whether the authors provided thorough results supporting the necessity of LiDAR reflectance. As I addressed in the previous section, the results are not sufficient, and I believe the authors should be more dedicated to providing results using different, more advanced models with and without LiDAR reflectance. Relation To Broader Scientific Literature: This paper can be influential for future researchers who use LiDAR sensors in their downstream tasks. As the authors mention, previous LiDAR-based perception methods mainly use point locations without the LiDAR reflectance. Moreover, this paper also designs the LiDAR point serialization well, respecting the LiDAR scan itself. So, I believe it is impactful to this field. Essential References Not Discussed: I believe the authors should provide a technical and experimental comparison with Point Transformer v3, which first introduced point cloud serialization using conventional space-filling curves. According to the authors' claims, I believe that method could be utilized not just for 3D perception tasks but also for LiDAR compression using its serialization method.
__If my understanding is correct, please provide the authors' analysis and comparison in the rebuttal.__ Other Strengths And Weaknesses: The authors describe well how LiDAR reflectance is used in the LiDAR compression scheme. The scan serialization looks interesting since it respects the inherent properties of LiDAR sensors. The proposed methodology looks sound, and the resulting compression benefit is highly admirable, as stated in Table 1. However, my concern is the necessity of introducing the Mamba architecture into this compression task. From the writing in the introduction, I cannot clearly see why the authors need to leverage the Mamba architecture for this task. I understand that Mamba is more efficient than the Transformer architecture thanks to its linear computational complexity. However, this viewpoint suggests that the authors want to emphasize efficiency rather than effective compression. I hope the authors can describe their position on this issue in the rebuttal. Another weakness is the claim about the importance of LiDAR reflectance for 3D perception, as stated in the previous section by the reviewer. Table 6 of the supplementary material only provides results using PointPillar; beyond this, I cannot be sure of the authors' claim. Meanwhile, I am also confused about why LiDAR compression is important to the 3D perception task itself. The experimental results imply that the authors put more weight on efficiency, but Table 6 and Line 53 are more about performance itself. I am a bit confused about this; can the authors clarify this issue as well? I believe that if LiDAR reflectance itself is proven to be effective for LiDAR compression, the logic is good enough. What I wonder is why the statement about 3D perception is needed. I hope I have clarified my confusion for the authors.
Other Comments Or Suggestions: I have no further comments. I wrote my questions and issues in the previous sections. Questions For Authors: I have no further comments. I wrote my questions and issues in the previous sections. Ethical Review Concerns: There is no ethical issue in this paper. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful feedback. Thank you for recognizing the *highly admirable compression benefit* presented in our study and acknowledging its potential to be *influential to future researchers* and *impactful to this field*. We highly value this opportunity to address your concerns. **1. Relation Between LiDAR Point Cloud Compression, LiDAR Reflectance Compression, and 3D Perception Task** **LiDAR point cloud compression** reduces data volume from 3D LiDAR scans, which consist of spatial coordinates (geometry) and reflectance values (attributes), each requiring distinct compression methods: - **LiDAR geometry compression** encodes coordinates into a compact bitstream. - **LiDAR reflectance compression** is dedicated to optimizing attribute data into another bitstream, using geometry as conditional prior. The divergent statistical properties of geometric and reflectance data have historically motivated their independent compression solutions. This paper focuses on the **LiDAR reflectance compression**, significantly improving compression efficiency for reflectance data. Existing research predominantly advances LiDAR reflectance compression through technical refinements, while overlooking a fundamental systems-level inquiry: *"Is reflectance compression necessary, and how does it impact downstream tasks?"* We first address this gap through task-driven analysis, as illustrated in the introduction section and supplementary. Our evidence confirms that reflectance data significantly influences perception performance, thereby validating its compression and transmission necessity. Having established this critical premise, we subsequently propose the coding paradigm. Our contribution not only advances compression technology but also formally links reflectance fidelity to perception needs. **2. 
Logical Relation Between LiDAR Reflectance Compression and Mamba Architecture** We sincerely thank the reviewer for the constructive feedback. We would like to clarify that in the compression domain, efficiency and performance are equally critical considerations. Specifically, the autoregressive coding employed in SerLiC is similar to next token prediction in natural language processing (NLP). Autoregressive models are widely recognized for their exceptional contextual modeling capabilities [1][2], but their practical application has been constrained by high computational complexity. While Mamba is not the only option for sequential modeling, our findings in Supplementary section C.2 demonstrate that Mamba exhibits much lower complexity, by several orders of magnitude, compared to attention-based architectures under equivalent layer stacking and channel dimensions. By combining Mamba's linear complexity with our proposed dual-parallel strategy, SerLiC achieves real-time processing capabilities. This is the key reason for our choice of Mamba. **3. Quantitative Experiments for the Importance of the LiDAR Reflectance** Following the reviewer's suggestion, we have conducted additional experiments using two more classical models (SECOND and PointRCNN), with the results presented in the table below ("w/o R" denotes performance with the reflectance information completely removed).

||Car|Pedestrian|Cyclist|**mAP**|
|:-|:-:|:-:|:-:|:-:|
|PointPillar| 78.40 | 51.41 | 62.81 | **64.21** |
|PointPillar (w/o R)| 74.21 | 14.08 | 34.28 | **40.86** |
|SECOND| 81.61 | 51.15 | 66.74 | **66.50** |
|SECOND (w/o R)| 78.92 | 35.66 | 50.64 | **55.07** |
|PointRCNN| 80.54 | 55.04 | 70.89 | **68.82** |
|PointRCNN (w/o R)| 77.00 | 35.28 | 42.51 | **51.60** |

Nonetheless, we respectfully clarify that the core contribution of our work lies in the development of a high-performance real-time reflectance compressor.
The extensive utilization of reflectance in downstream tasks provides substantial validation for the significance of our work. **4. Compression Using Serialization Methods in Point Transformer v3** While Point Transformer v3 (PTv3) is designed for perception tasks and its shift-based patch interaction isn't directly applicable to compression, its serialization method offers an alternative to our scan-order approach. Following the reviewer's suggestion, we conducted experiments on SemanticKITTI. Results show that the Hilbert curve (3.74 Bpp) and Z-order curve (3.70 Bpp) underperform our scan-order serialization (3.64 Bpp), confirming the efficacy of our method. The detailed table is available at [this anonymous website](https://anonymous.4open.science/r/11079-88B1/readme.md). [1] David Minnen, et al. "Joint Autoregressive and Hierarchical Priors for Learned Image Compression." NeurIPS, 2018. \ [2] Chunyang Fu, et al. "OctAttention: Octree-based large-scale contexts model for point cloud compression." AAAI, 2022. --- Rebuttal Comment 1.1: Comment: I thank the authors for the precise rebuttal. Overall, my concerns are resolved, and I have no further questions about the manuscript or the rebuttal. Among the answers, the explanation of using Mamba looks okay, and I respect the authors' design choice. In short, I raise my score to __weak accept__. Thank you for the rebuttal and your endeavors. Best,
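For readers unfamiliar with the space-filling-curve serializations compared in point 4 above, here is a minimal sketch of Z-order (Morton) ordering for voxelized 3D coordinates. It only illustrates the alternative orderings PTv3 uses; it is not the paper's scan-order serialization:

```python
# Minimal Z-order (Morton) serialization for 3D integer coordinates, the
# kind of space-filling-curve ordering used by Point Transformer v3.

def part1by2(x: int) -> int:
    # Spread the low 10 bits of x so two zero bits separate each original bit.
    x &= 0x3FF
    x = (x ^ (x << 16)) & 0xFF0000FF
    x = (x ^ (x << 8)) & 0x0300F00F
    x = (x ^ (x << 4)) & 0x030C30C3
    x = (x ^ (x << 2)) & 0x09249249
    return x

def morton3d(x: int, y: int, z: int) -> int:
    # Interleave the bits of x, y, z into a single sort key.
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

# Sorting voxel coordinates by their Morton key yields the Z-order traversal.
points = [(3, 1, 0), (0, 0, 0), (1, 1, 1)]
print(sorted(points, key=lambda p: morton3d(*p)))
# → [(0, 0, 0), (1, 1, 1), (3, 1, 0)]
```

A Hilbert-curve ordering is computed analogously but with a more involved per-level coordinate rotation, which is why Z-order is the simpler of the two serializations compared above.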
In-Context Linear Regression Demystified: Training Dynamics and Mechanistic Interpretability of Multi-Head Softmax Attention
Accept (poster)
Summary: The authors find that a multi-head softmax attention network effectively becomes a two-head softmax attention network that approximates linear attention better than a single-head softmax one. The advantage of softmax-based attention over linear attention is that one does not have to train separately for different context lengths. The experiments bear out these observations. There is also some analysis of the training dynamics. ## Update after rebuttal I maintain my score. Claims And Evidence: The claims are justified. Methods And Evaluation Criteria: It is mostly a theoretical paper, needing simple experimental evidence, which was provided. Theoretical Claims: The theoretical claims involve informal but reasonable Taylor-expansion-based arguments. Experimental Designs Or Analyses: Figure 7 provides empirical evidence that the arguments are right. Supplementary Material: Section B.1 in particular. Relation To Broader Scientific Literature: In the recent past, there have been many papers analyzing in-context learning in toy problems. Many of these involve single-head attention; not many of them tackle multi-head attention. Although some of the arguments are informal, there is value in this work. Essential References Not Discussed: I did not notice obvious omissions. Other Strengths And Weaknesses: I think the analysis is useful for many, but the key results were relatively obvious to me. Other Comments Or Suggestions: This seems to be a well-written paper. Questions For Authors: I do not have any. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their positive feedback and appreciation of our work. Here are our clarifications on the theoretical analysis. **On Informal Theoretical Claims.** We would like to clarify that most of the theoretical arguments presented in this paper are rigorous, with the exception of the analysis for Stage I in Section 4.2, i.e., the theoretical derivation of feature emergence via gradient optimization. While both Stage I and Stage II admit a Taylor expansion-based argument, the analyses are derived from different perspectives. We provide further clarification as follows. - (`Analysis for Stage I`) In Stage I, we perform a gradient-based analysis to identify the driving components in the gradient that lead to the emergence of patterns. We acknowledge that in this stage, we employ an informal argument by focusing on the first-order approximation of $\exp(\cdot)$, leveraging the small-scale initialization that renders higher-order terms negligible. To support this argument, we compare the informal theoretical conjectures with experimental results, which show perfect alignment. - (`Analysis for Stage II`) In contrast, during Stage II, we analyze the loss landscape to understand the convergence manifold. Assuming that the patterns identified in the first stage have already emerged, we examine the higher-order terms in the Taylor expansion of the loss function to demonstrate that homogeneous KQ scaling is a necessary condition for loss minimization. We emphasize that the expansion-based argument in this stage is rigorous for the following reasons: (a) the parameter scale remains small during the final stage, allowing the series summation to be analytically tractable, and (b) we derive the optimality condition for each term of the form $\langle\mu_t,\omega_t^{\odot k}\rangle$ for $k\in\mathbb{Z}$, without truncation, ensuring that the result is both rigorous and concise.
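The first-order argument for Stage I can be made concrete with a standard expansion (a generic sketch of the small-logit regime; the notation $s_1,\dots,s_n$ for the attention logits is our shorthand, not the paper's):

$$\mathrm{softmax}(s)_i \;=\; \frac{e^{s_i}}{\sum_{j=1}^{n} e^{s_j}} \;\approx\; \frac{1+s_i}{n+\sum_{j=1}^{n} s_j} \;\approx\; \frac{1}{n}\Big(1 + s_i - \frac{1}{n}\sum_{j=1}^{n} s_j\Big),$$

so under small-scale initialization, softmax attention acts as a uniform average plus a centered, linear-attention-like correction; the neglected terms are second order in the logits, which is what renders them negligible in Stage I.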
--- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I might agree with some other reviewers that the work is somewhat limited. Having worked in this area, upon reading the paper, my first feelings were: 1) Now that I have seen it, the results are obvious. 2) Why did I not think of this? I think the authors deserve credit for invoking such feelings. Hence, I would like to maintain my score. Best of luck! --- Reply to Comment 1.1.1: Comment: Thanks for your kind appreciation of our work!
Summary: The paper provides a comprehensive understanding of multi-head softmax attention in conducting ICL for linear regression tasks. Through empirical investigation, it observes the specific patterns for the optimal parameters within the multi-head transformer structure and the superiority of multi-head over single-head attention. Based on the observed patterns, it reparameterizes the attention models to conduct a theoretical study of the training dynamics and expressive power of multi-head softmax attention. It shows that multi-head softmax attention emulates preconditioned gradient descent and achieves near-Bayesian optimality in in-context learning. Additionally, the paper extends its findings to non-isotropic covariates and multi-task regression, and also provides a comparison with linear attention. Claims And Evidence: In general, the claims made in the submission are clear and well written. The paper provides empirical evidence for the observed patterns of the optimal configuration of multi-head softmax attention and the superiority of multi-head attention. For theoretical results, it also provides some empirical validations and proofs of the statements. Methods And Evaluation Criteria: The paper does not propose methods / evaluation criteria. Theoretical Claims: I have checked the sketches of the proofs for the main theorems, which are shown in the Appendix. I did not observe mistakes in the proofs. Experimental Designs Or Analyses: I have checked the details of the experimental settings, results, and analysis in Section 3 (and some further explanations and discussion in the Appendix). In general, the empirical settings are reasonable and align well with the design of the theoretical settings, and I did not observe particular issues. Supplementary Material: I reviewed the additional background and related works, as well as the additional empirical and theoretical results, and took a look at the general idea of the proofs of the theory.
Relation To Broader Scientific Literature: The key contributions of the paper, as claimed by the authors, include identifying the optimal configuration of multi-head softmax attention for in-context linear regression, analyzing the training dynamics and expressive power, and demonstrating its advantages over single-head models. The analysis builds on prior studies of ICL for linear regression using transformers. **However, a major concern regarding this paper is its novelty, as many of its contributions appear to have been covered by previous works.** For instance, [1] has already analyzed the training dynamics of multi-head softmax attention in ICL, discussing optimality through upper/lower bounds on the ICL loss and providing results based on Bayesian risk analysis. Additionally, [2] has presented similar findings regarding the optimal configuration, such as the diagonal structure of the QK matrix. [2] has also compared multi-head and single-head attention and extended the discussion to additional scenarios such as non-isotropic covariates (correlated features). [1] Chen, S., Sheen, H., Wang, T., & Yang, Z. Training dynamics of multi-head softmax attention for in-context learning: Emergence, convergence, and optimality. COLT 2024. [2] Cui, Y., Ren, J., He, P., Tang, J., & Xing, Y. Superiority of multi-head attention in in-context linear regression. arXiv preprint arXiv:2401.17426. 2024. Essential References Not Discussed: This paper focuses on investigating the training dynamics and expressive power of multi-head softmax attention. [2] investigates similar topics, but the paper does not cite or compare with it. In addition, given that [1] also investigates the training dynamics and optimality, the paper does not provide a very clear explanation of the difference between this work and [1]. [3] also has a close relation to the content of this paper. These works are highly related and important, and it is necessary to discuss them for a careful literature review.
[1] Chen, S., Sheen, H., Wang, T., & Yang, Z. Training dynamics of multi-head softmax attention for in-context learning: Emergence, convergence, and optimality. COLT 2024. [2] Cui, Y., Ren, J., He, P., Tang, J., & Xing, Y. Superiority of multi-head attention in in-context linear regression. 2024. [3] Li, H., Wang, M., Lu, S., Cui, X., & Chen, P. Y. How do nonlinear transformers learn and generalize in in-context learning? ICML 2024. Other Strengths And Weaknesses: Strengths: 1. In general, the paper is well written and easy to follow. 2. The analysis is comprehensive, as it involves both empirical and theoretical results on multi-head softmax attention and covers different perspectives, including the optimal model configuration, training dynamics, expressive power, a comparison between multi-head / single-head attention, a comparison with linear attention, and extensions to non-isotropic covariates and multi-task regression. Weaknesses: 1. As indicated in the previous review section, the novelty of the paper is limited, as the main contribution has been covered by previous papers [1][2]. 2. The paper observes the optimal configuration of multi-head softmax attention only through empirical observations. In contrast, [2] provides theory showing the diagonal property of the optimal QK weight and other configurations. Additionally, compared to [1] and [2], the theoretical analysis in this paper reparameterizes the attention parameters into just two vectors, significantly simplifying the complexity of the theoretical derivation. [1] Chen, S., Sheen, H., Wang, T., & Yang, Z. Training dynamics of multi-head softmax attention for in-context learning: Emergence, convergence, and optimality. COLT 2024. [2] Cui, Y., Ren, J., He, P., Tang, J., & Xing, Y. Superiority of multi-head attention in in-context linear regression. 2024. Other Comments Or Suggestions: 1.
It is suggested that the authors provide some practical implications of the findings in this paper for in-context learning on real-world text data. Understanding this is important, as the theoretical analyses focus on simplified settings, and their relevance to real NLP tasks remains uncertain. 2. The paper considers multi-head attention in which the outputs of all heads are summed. Would incorporating a learned weighted sum over heads further improve the prediction? Intuitively, it could provide more adaptability by allowing the model to assign different importance to different heads. Additionally, the paper states that increasing the number of heads beyond H = 2 provides no additional benefit. If a weighted sum is used, could further increasing the number of heads lead to performance improvements? 3. The training-dynamics analysis is based on a reparameterized model. If I understand correctly, it assumes that throughout the entire training process, the QK matrix is constrained to always have a diagonal structure, with only its diagonal values being trained. Although empirical results suggest that the optimal configuration after training is indeed diagonal, this simplification deviates from the actual training process, potentially oversimplifying the learning dynamics and failing to capture the complex interactions that may emerge in real transformer training. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your comments and assessments. Here is our response to the comments. **On Comparison with Existing Work.** - **Comparison with [1].** - (`[1] corresponds to single-head case in our paper`) [1] considers multi-head attention for **multi-task** linear regression, where the number of heads equals the number of tasks. Under a specialized initialization, [1] proves that **each head learns to handle one task**, corresponding to the **single-head case** in our paper. In contrast, we consider a more flexible setup in which multiple heads can be freely allocated to a single task, allowing the model architecture to be more expressive and complex. As a result, while [1] shows that the single-head model learns a **nonparametric** predictor with KQ scaling as $1/\sqrt{d}$, we demonstrate that the multi-head model learns a **parametric** GD predictor with KQ converging to $0^+$. Hence, the analyses are **fundamentally different**. - (`[1] also adopts reparametrization`) Note that [1] assumes decomposable weights (Definition 3.1 in [1]), which ensures that the eigenvector space remains fixed. Thus, the analysis in [1] also employs a **reparameterization**, showing that diagonalizability is preserved along the gradient flow and focusing on the eigenvalues, i.e., the diagonal entries in the isotropic case. - **Comparison with [2].** - (`Empirical Insights`) [2] only identifies the diagonal KQ patterns with potentially positive and negative values for **two-head** attention, as well as identical performance when the number of heads exceeds two. We further **quantitatively** figure out the detailed **sign-matching**, **homogeneous KQ magnitude**, and **zero-sum OV** patterns beyond $H=2$, and reveal that the multi-head softmax attention learns to implement a **GD predictor**, which is not covered in [2]. Thus, we establish a more complete understanding.
- (`Theoretical Insights`) We provide a comprehensive explanation by analyzing *how patterns emerge* through **training dynamics**, as well as *how the trained model operates* via a **function approximation and optimality analysis**. While [2] considers full-model parameterization, it focuses solely on a loss approximation similar to our Proposition 4.1; it does not capture the training dynamics and function approximation analyses that we develop based on the loss approximation results. Moreover, the main result in [2], the superiority of multi-head over single-head attention, is a **natural corollary** of our findings, since we show that single-head attention learns a nonparametric kernel regressor, whereas multi-head attention learns a parametric GD predictor. By applying the approximate loss from Proposition 4.1 with $H = 1$ and $H = 2$, we can reproduce the result. Finally, [2] only analyzes the KQ parameters with the OV parameters fixed to the corresponding optimal solution, while we analyze both KQ and OV parameters. - **Comparison with [3].** [3] focuses on binary classification tasks while we work on the regression task, marking a clear distinction. **On Real-World Implications.** Understanding what transformers learn for a specific task serves as a starting point for investigating general ICL mechanisms. The theoretical and empirical insights lay the foundation for understanding how deep models learn language. **On Transformer Architecture.** In this paper, we aim to understand the mechanism of the **standard** multi-head attention architecture, and a weighted sum over heads is beyond this scope. Also, the $\mu^{(h)}$'s naturally act as "weights" for each head and are expected to learn the "optimal" weights during training. Thus, we do not anticipate any improvement.
**On the Reparameterized Model.** As shown in Observation 4 of §3, the attention model develops a diagonal-only pattern in the KQ circuit and a last-entry-only pattern in the OV circuit during the **early stages of training** and then continues optimizing within this regime. This indicates that a diagonal parameterization is sufficient to capture the core behavior of model training. Importantly, we **do not impose any diagonal constraints** on the KQ parameters: the diagonal pattern **emerges naturally** during **full-model** training and persists throughout (see Figure 4). Moreover, if the KQ parameters are initialized as a diagonal matrix, this structure is preserved over the course of training. We thank the reviewer for pointing out the related work, and we will incorporate these comparisons in the revised version. [1] Chen, S., Sheen, H., Wang, T., & Yang, Z. Training dynamics of multi-head softmax attention for in-context learning: Emergence, convergence, and optimality. COLT 2024. [2] Cui, Y., Ren, J., He, P., Tang, J., & Xing, Y. Superiority of multi-head attention in in-context linear regression. 2024. [3] Li, H., Wang, M., Lu, S., Cui, X., & Chen, P. Y. How do nonlinear transformers learn and generalize in in-context learning? ICML 2024. --- Rebuttal Comment 1.1: Comment: Thanks for the response! Some of my concerns are resolved, and I update the score accordingly. However, I believe that in the current version, the discussion comparing with existing works is insufficient. Through a quick search, I found additional related works that are missing, e.g., [1,2]. Together with the related work section being postponed to the appendix, all of this could mislead readers in understanding the contribution of this paper. Could you please put the related work section in the main paper and revise it comprehensively? I understand that ICML does not allow revising the pdf directly. I would like to at least take a look at the revised paragraphs. Then I would like to further update the score.
[1] Linear Transformers with Learnable Kernel Functions are Better In-Context Models [2] In-Context Learning of Polynomial Kernel Regression in Transformers with GLU Layers --- Reply to Comment 1.1.1: Comment: We thank the reviewer for pointing out the overlooked related work. Below is the revised discussion, which we will incorporate into the main article in the updated version. Due to word constraints in the rebuttal, we may omit the full list of references here. **In-Context Linear Regression.** To better understand how transformers acquire ICL abilities, researchers study the linear regression task, examining both the model's expressive power and training dynamics. The pioneering work of Garg et al. (2022) empirically investigates the performance of the transformer on linear regression, demonstrating that transformers achieve near Bayes-optimal results. Von Oswald et al. (2023) studies a simplified linear transformer and reveals that it learns to implement a gradient-based inference algorithm. From a theoretical perspective, Zhang et al. (2024); Ahn et al. (2023a) show that one-layer linear attention provably learns to perform preconditioned gradient descent, using training dynamics and loss landscape analysis, respectively. Furthermore, Chen et al. (2024a) provides the first theoretical insight into standard softmax-based attention, showing that under certain initialization schemes, the trained model converges to a kernel regressor. Concurrently, Cui et al. (2024) examine how multi-head softmax attention learns linear regression in context, identifying a learned diagonal KQ pattern with both positive and negative entries from an experimental perspective. In addition, Bai et al. (2024) explores the expressive power of transformers to implement various linear regression algorithms.
Recent studies also examine how transformers handle variants of linear regression, including two-stage least squares for addressing data with endogeneity (Liang et al., 2024), adaptive algorithms for sparse linear regression (Chen et al., 2024c), EM-based learning of mixture of linear models (Jin et al., 2024), and multi-step gradient descent within the loop transformer architecture (Gatmiry et al., 2024). Besides, Aksenov, et al. (2024) and Sun, et al. (2025) are also broadly related to our work, in which they investigate the role of the nonlinear softmax activation within the context of regression tasks. **Comparison with Related Work.** We provide a detailed discussion of the differences between our work and that of Chen et al. (2024a) and Cui et al. (2024), which are among the most closely related studies. Different from our setup, Chen et al. (2024a) considers multi-head attention in the context of multi-task linear regression, where the number of heads matches the number of tasks. Under a specialized initialization, they show that each head independently learns to solve a distinct task--effectively reducing to a single-head per task setup, which corresponds to the single-head case in our framework. In contrast, our setup allows multiple heads to be flexibly allocated to a single task, enabling a more expressive and complex model architecture. As a result, while Chen et al. (2024a) shows that the single-head model learns a nonparametric, kernel-type predictor with scaling of KQ parameters as $\Theta(1/\sqrt{d})$, we demonstrate that the multi-head model instead learns a parametric gradient descent predictor with KQ converging to $0^+$. This not only recovers the known results for linear attention (e.g., Zhang et al., 2024) but also reveals that multi-head softmax attention can outperform the single-head one by effectively encoding the linear architecture through an explicit approximation. The work of Cui et al. 
(2024) identifies the diagonal KQ patterns with potentially positive and negative values in two-head softmax attention, and observes identical performance when the number of heads exceeds two. We go further by quantitatively characterizing the learned model. Specifically, we reveal detailed sign-matching, homogeneous KQ magnitudes, and zero-sum OV patterns for head counts beyond $H=2$, and show that multi-head softmax attention effectively learns to implement a gradient descent predictor. From a theoretical perspective, Cui et al. (2024) adopts full-model parameterization and conducts a loss landscape analysis. In contrast, we begin by establishing an approximate loss and then develop a comprehensive explanation based on training dynamics, function approximation, and optimality analysis. Our results reinforce and go beyond the core argument in Cui et al. (2024) regarding the superiority of multi-head over single-head attention: we not only compare the testing loss but also explicitly demonstrate that single-head attention learns a nonparametric kernel regressor, while multi-head attention learns a more powerful parametric gradient descent predictor.
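The limiting behavior described above (two softmax heads with opposite-sign KQ and OV parameters collapsing to a linear, one-step-GD-type predictor as the KQ scale tends to zero) can be checked numerically. The sketch below is our own illustration rather than the authors' code; `omega`, `mu`, and `eta` are illustrative parameters, and the comparison target is the centered linear-attention predictor that the first-order expansion of the softmax difference yields.

```python
import math
import random

random.seed(0)
d, n = 4, 64
dot = lambda u, v: sum(a * b for a, b in zip(u, v))

X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]   # context inputs
beta = [random.gauss(0, 1) for _ in range(d)]
y = [dot(x, beta) for x in X]                                    # noiseless linear labels
xq = [random.gauss(0, 1) for _ in range(d)]                      # query input
s = [dot(x, xq) for x in X]                                      # attention scores for the query

def two_head_softmax(omega, mu):
    """Two softmax heads with opposite-sign KQ (+/- omega) and OV (+/- mu)."""
    def head(w):
        e = [math.exp(w * si) for si in s]
        z = sum(e)
        return sum(ei / z * yi for ei, yi in zip(e, y))          # softmax-weighted label sum
    return mu * head(omega) - mu * head(-omega)

# centered one-step-GD (linear attention) target with step size eta
eta = 1.0 / n
s_bar = sum(s) / n
gd_pred = eta * sum((si - s_bar) * yi for si, yi in zip(s, y))

# as omega -> 0 with mu * omega held fixed (mu = eta * n / (2 * omega)),
# softmax(omega*s) - softmax(-omega*s) ~ (2*omega/n) * (s - s_bar),
# so the two-head output converges to the linear GD-type predictor
omega = 1e-4
mu = eta * n / (2 * omega)
approx = two_head_softmax(omega, mu)
```

Halving `omega` (while rescaling `mu` accordingly) shrinks the gap roughly quadratically, consistent with the even-order terms cancelling in the difference of the two heads.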
Summary: This paper investigates the training dynamics of multi-head softmax attention in in-context learning. Through experimental analysis, the authors discover two key patterns: (1) QK weight matrices develop a diagonal structure, with diagonal elements being nearly uniform across all heads, and (2) QK weights and effective OV weights share the same sign, with their average approaching zero. Based on these findings, they propose a simplified training model with 2H parameters and describe the training process in two distinct stages. The authors also analyze the expressive power of their simplified multi-head attention model, comparing it with single-head attention, and validate their findings through extensive experimentation. Claims And Evidence: The claims are well-supported by both theoretical analysis and empirical evidence. Methods And Evaluation Criteria: The methodology and evaluation criteria are sound and appropriate. Theoretical Claims: While I haven't examined all proofs in detail, the theoretical framework appears well-documented and logically sound. Experimental Designs Or Analyses: The experimental designs are robust and well-constructed, providing sufficient evidence to support the paper's conclusions. Supplementary Material: No supplementary material was included in this submission. Relation To Broader Scientific Literature: This research contributes to our understanding of large language models by examining how multi-head softmax attention networks learn to perform in-context linear regression, offering valuable insights into LLM training processes and mechanisms. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strength**: 1. The paper effectively uses experimental analysis to reveal parameter patterns in trained softmax attention models, providing insights into both training dynamics and operational mechanisms. 2. The research offers a thorough comparative analysis of softmax and multi-head attention architectures. 
**Weaknesses**: 1. The analysis of multi-head attention training dynamics relies heavily on intuitive reasoning, with Appendix C limiting its formal analysis to two-head attention only. Other Comments Or Suggestions: If there are any misunderstandings on my part, please point them out, and I will reconsider my evaluation of this work. Questions For Authors: Regarding the apparent discrepancy between theoretical predictions and experimental results: While the analysis in Appendix C suggests w should approach 0, Figures 2(b) and 1(b) show convergence to a non-zero constant. Could you explain this apparent contradiction? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive feedback and appreciation of our work. Here's our response to the questions. **On Heuristic Derivation and Two-Head Gradient Flow.** We acknowledge that in Section 4.2, we employ a heuristic derivation to explain how the observed patterns emerge during gradient optimization, using an expansion-based argument. To establish a more rigorous foundation, we analyze the gradient flow in Appendix C to explain why these patterns persist once they emerge and to characterize the trajectory of the parameter evolution. - (`Multi-head Attention Effectively Acts as Two-head Model`) Furthermore, our experiments (see Figure 3) demonstrate that a multi-head model essentially implements the same estimator as a two-head model. As detailed in Equation (4.5), on the solution manifold the multi-head model ultimately separates into two distinct head types, determined by the signs of $\mu^{(h)}$ and $\omega^{(h)}$. Within each group, the heads behave equivalently to a single head with $\omega = \omega^{(h)}$ and a $\mu$ equal to the sum of the $\mu^{(h)}$ values within that group. Thus, analyzing the dynamics of a two-head model provides all the essential insights. - (`Dynamics are Similar Under Symmetry`) Besides, note that if we initialize a multi-head model with identical absolute values for $\mu$ and $\omega$ (see Definition C.1) and randomly assign each head a positive or negative sign, then as the number of heads increases, each type group is highly likely to contain nearly the same number of heads. By symmetry, tracking the effective KQ and OV parameters within each group is equivalent to analyzing a two-head model. **On Discrepancy between Theoretical Predictions and Experimental Results.** - (`Experimental Justification: Small KQ already Behaves as the Limiting Model`) As shown by the theoretical results in Section 4.3, when KQ approaches 0, the model exactly implements the GD predictor. 
Therefore, we compare the performance—measured in terms of mean squared error (MSE)—of the learned model with small KQ against that of the GD predictor. Figure 3(b) demonstrates that the loss curves for the learned transformer model and the GD predictor coincide, indicating **identical performance** and confirming that the approximation (see Section 4.3) holds well even for a small constant-level KQ. Moreover, because the model is trained via gradient optimization, the **negligible loss gap** between the small-KQ multi-head attention and its limiting case suggests that further optimization from a small constant KQ to the limit of 0 would require an exceptionally long time, likely exceeding the number of training steps used in our experiments. - (`Potential Mechanism: Edge of Stability`) In practice, the particular magnitudes of the KQ parameters learned by the optimization algorithm depend on the particular choices of learning rate, training steps, and batch size (if SGD or Adam is applied). This is partly due to the *"river valley"* loss landscape near the global minimum (see Figure 5(b)). From the theoretical perspective, this phenomenon can be attributed to the **edge of stability** (e.g., [1]), which is commonly observed in neural network training. In this regime, gradient descent progresses non-monotonically, oscillating between the “valley walls” of the loss surface and failing to fully converge to the minimum. Experimental results indicate that choosing a *smaller learning rate*, a *larger batch size*, and *sufficiently many training steps* can drive the KQ parameters to eventually approach zero. We thank the reviewer for the insightful questions, and we will incorporate these clarifications in the revised version. [1] Cohen, J. M., Kaur, S., Li, Y., Kolter, J. Z. and Talwalkar, A. (2021). Gradient descent on neural networks typically occurs at the edge of stability. arXiv preprint arXiv:2103.00065.
The Expressivity of Fixed-Precision Transformers without Positional Encoding
Reject
Summary: The paper explores the expressive capabilities of transformer decoders constrained by a fixed-precision setting, such as a specific floating-point arithmetic, and with limited positional embedding, utilising formal language theory. Specifically, the paper posits that if a particular assumption concerning the query and key matrices of the examined transformers is satisfied (Assumption 5.1), then transformers devoid of positional embedding recognise only finite or cofinite languages (Section 5), whereas transformers with an absolute positional embedding can recognise all cyclic languages. It is further contended that transformers not conforming to Assumption 5.1 recognise letter set languages. ## update after rebuttal No changes, I stand with my initial review. Claims And Evidence: While I do not doubt the general correctness of the results, I find the proofs in this paper unconvincing for a few reasons: - The precise definition of the transformer decoders considered in this paper is somewhat vague. Although the details are just enough to understand the upper bounds, the lower bounds require more clarification. Specific aspects, such as the type of attention mechanism employed, are not explicitly defined. I felt like these details are spread throughout the paper, but the absence of a succinct definition that encapsulates these elements, along with theorems that clearly state the established results, makes it difficult for me to be certain about the contribution of this paper. - There are several details that leave me puzzled. For example, in Definition 4.2, what does \sigma represent? I assume it stands for the input word, previously referred to as w. Similarly, in Definition 4.3, what does (sep) stand for? Could it be (EOS)? While these are minor issues, similar ambiguities scattered throughout the main paper impede a comprehensive understanding. - It strikes me as odd that there are no extensive formal proofs provided. 
Often, proofs seem more like sketches (for example Theorem 6.1) or are distributed across an entire section, which does not aid comprehension. Additionally, there is no technical appendix to support these more informal arguments presented in the paper. - I'm finding it difficult to grasp the importance of Assumption 5.1. The paper mentions that this assumption "generally holds for most trained transformers," yet I remain sceptical. For instance, in the context of using a transformer in a low-bit setting, it's conceivable that an overflow situation might occur, resulting in reaching either -infty or infty. Methods And Evaluation Criteria: See above regarding my comment on the form of proofs provided. Theoretical Claims: I endeavoured to understand the results but found it challenging due to a lack of clarity regarding the specific setting considered in these outcomes. Experimental Designs Or Analyses: Not applicable. Supplementary Material: Not applicable. Relation To Broader Scientific Literature: This paper is well-placed in an active line of research concerned with understanding the expressive power of transformers using formal language theory. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: strengths: - The paper offers intuitive explanations and informal descriptions wherever feasible. - Overall, despite missing crucial technical details, the core contribution of the paper is relatively clear. weaknesses: - While it's novel, I find the scenario where a transformer, devoid of positional embedding and constrained by fixed-precision arithmetic, leads to finite or cofinite languages, somewhat predictable. I am not aware of any existing proofs on this exact setting, thus it is new, but I am not entirely convinced of its significance. Other Comments Or Suggestions: None besides those stated above or posed as questions below. Questions For Authors: I recognise that my critique regarding the lack of details might be somewhat severe.
Could you clarify the exact definitions of: - Absolute positional embeddings? - What is the precise class of transformers that contribute to the lower bounds you present? Aside from this: - Why have you opted to provide only a proof sketch for Theorem 6.1? I am struggling to see how the technical details merit this approach. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful and constructive feedback. We greatly appreciate your careful reading and insightful suggestions, which have helped us clarify key aspects of the paper. Below, we respond point by point to the concerns raised. # Weaknesses ## 1. Imprecision in the definition of the Transformer; unclear symbols in definitions - We acknowledge that our definitions of Transformer components, as well as our explanations for some symbols (e.g., $\sigma$, <sep>), were insufficient. We appreciate the reviewer pointing this out, and we will revise all relevant parts of the manuscript to provide more precise and consistent definitions throughout. ## 2. The validity and importance of Assumption 5.1 - validity - We agree that Assumption 5.1 may not always hold—particularly in low-bit settings. However, our intention was to model more typical settings found in practical-scale models such as BERT or LLaMA, where floating-point precision (e.g., FP16 or FP32) is generally sufficient. - In cases where precision is severely limited, we expect this assumption to break down, which is precisely why we chose to state it explicitly as an assumption. - importance - The reviewer notes that a Transformer without positional encodings and with fixed-precision arithmetic yielding only finite or cofinite languages may appear somewhat expected. However, this outcome does not follow from those conditions alone. As shown in Table 1, even under fixed precision and NoPE, the expressivity exceeds FinCofin unless Assumption 5.1 is also imposed. With this assumption, the expressivity becomes exactly FinCofin, both as an upper and lower bound. - In this sense, Assumption 5.1 plays a critical role in identifying a tight boundary on expressivity.
We view our contribution as providing a theoretical grounding for how fixed-precision constraints shape the model’s representational limits and establishing FinCofin as the precise limit under minimal yet meaningful assumptions. # Question for Authors ## 1. (Clarification) Absolute positional embeddings - In our setting, we treat APE as a function of the form $\mathrm{APE}: \mathbb{N} \to D_p^d$ which maps each position to a $d$-dimensional vector of fixed-precision floats. This encoding is added element-wise to the token embedding before being passed into the Transformer block. - Note that under this definition, learnable positional embeddings, such as those used in GPT-2, are considered a subclass of APE. We will revise the manuscript to clearly include this definition. ## 2. (Clarification) Transformers which contribute to the lower bound - We describe the architecture used to establish the lower bound in Section 4.4. This includes the model’s structure and computational setting. (please ignore "and attention head" in line 189) - If there are specific aspects that were unclear or insufficiently detailed, we would greatly appreciate your guidance on which parts should be clarified. We are happy to revise the manuscript accordingly. ## 3. Theorem 6.1 will be excluded - We sincerely thank the reviewer for raising this important issue. We agree that the current proof sketch for Theorem 6.1 lacks sufficient formal detail, and that the argument, as presented across Section 6.1, does not aid comprehension. - Due to time constraints during the submission process, we included only a high-level idea without a fully formalized proof. - We found that the lower bound for APE is, in fact, no stronger than that for NoPE under our current assumptions. While a tighter lower bound may exist, it likely requires a different construction and falls outside the scope of this work. - Accordingly, we plan to exclude Section 6.1 entirely from the final version.
We will instead mention this direction briefly as a possible avenue for future research. We believe this change will improve the overall clarity and integrity of the paper. # Summary - Once again, we are grateful for your detailed review and recognition of the core contributions of our work. We believe the revisions we plan to make based on your comments will strengthen the paper, particularly regarding the clarity of definitions, the role of key assumptions, and the exclusion of Section 6.1. - Please feel free to let us know if there are further points that would benefit from clarification.
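As a side note on what the FinCofin boundary means operationally: deciding membership in a finite or cofinite language reduces to a lookup against a finite exception set, which is the sense in which such models act as lookup tables. A minimal sketch (our illustration, not from the paper):

```python
def make_fincofin_recognizer(exceptions, cofinite):
    """Recognizer for a finite language (accept iff w in S) or a
    cofinite language (accept iff w not in S), where S is finite."""
    S = frozenset(exceptions)
    def recognize(w):
        # the entire decision procedure is a finite membership check
        return (w not in S) if cofinite else (w in S)
    return recognize

# finite language {"ab", "aab"} over the alphabet {a, b}
finite = make_fincofin_recognizer({"ab", "aab"}, cofinite=False)
# cofinite language: every string except "" and "a"
cofin = make_fincofin_recognizer({"", "a"}, cofinite=True)
```

Note that any recognizer of this form is trivially closed under complement (flip the `cofinite` flag), mirroring the closure properties of the FinCofin class.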
Summary: The authors study the expressivity of transformers while considering the practical constraints of real-world usage, such as fixed-point precision. The authors show that without positional encoding, transformers can only represent finite and co-finite languages, which are subclasses of regular language. Adding positional encoding improves the expressivity but does not alleviate the main limitations due to fixed-point precision. The authors conclude with a detailed discussion, highlighting the gap between theoretical models and models used in practice. Their analysis leads to the conclusion that transformers, because of fixed-point precision and other practical limitations, are akin to very efficient look-up tables that can express finite and co-finite languages. Claims And Evidence: The theoretical claims are supported by clear and detailed proofs. Methods And Evaluation Criteria: The authors provide theoretical results to better understand the expressivity of transformers. The model is simplified to remain in a controlled setting, and the method seems to make sense for the problem at hand. Theoretical Claims: The theoretical findings are supported by detailed and clear proofs. Experimental Designs Or Analyses: There were no experiments conducted. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: I believe that the related work and prior works are well introduced and compared. The submission's contributions are interesting and are part of a growing interest in the literature to better understand the expressivity of transformers. The novelty seems to be in the analysis from a formal language perspective with realistic assumptions on the models, notably the fixed-point precision one which is indeed a real limitation in practice. Essential References Not Discussed: To the best of my knowledge, there were no essential references not discussed in the current submission. 
Other Strengths And Weaknesses: **Strengths** - The paper is well-written, and the problem tackled seems of great interest - The findings are contextualized and explained to grasp the main intuitions - I appreciate that the authors consider a setting as realistic as possible - I appreciate the discussion and "possible extensions" parts that list the limitations of the current submission and potential directions for future work. This helps understand the contributions of the submission without over-stating them **Weakness** I list below what I think are weaknesses, but I would be happy to be corrected if I misunderstood some important aspects of the authors' contributions. - Some technical parts on the formal language part are not well-detailed, notably the FOC[+; MOD] ones, which makes it hard to grasp all the impact of the submissions' contributions for non-experts - Most transformers used in practice, notably in language tasks, use learnable positional encoding or other more involved frameworks (e.g., ROPE [1]) that consist of causal attention with decoder-only blocks. In the current simplified setting of the submission, it is not clear, at least to me, how the insights would translate to those more common models. - It is not clear, at least to me, if the submission conclusion is informative of what we observe in practice, that is, that transformers show surprising generalization and emerging capabilities. I am doubtful that efficient look-up tables would match such capabilities, but I would be happy to hear what the authors think of that. Overall, I find the paper well-written, and the contributions seem valuable, although the setting considered is simplified. This is the reason for my score, but I am ready to modify it depending on the discussion with the authors. *References* [1] Su et al. 
RoFormer: Enhanced Transformer with Rotary Position Embedding, arXiv 2023 Other Comments Or Suggestions: I list below some potential typos: - Table 1: "Uppoer" --> "Upper" - Table 1 caption: "positinoal" --> "positional" - l. 119, second column: "theoretic" --> "theoretical" - l.165: Given that GPTs use causal attention, should it be "Since decoder-only Transformer" instead of "Since encoder-only Transformer"? - l.167: "absolute" --> "absolute" - l.165: "generation," --> "generation." - l.400: ", In this study, We adopted" --> "In this study, we adopted" - l. 426, first column: "Languge Modeling" --> "Language Modeling" Questions For Authors: 1) In practice, most transformers make use of positional encoding, either deterministic or learnable or more complex ones (ROPE). How would the insights translate to such a model given that they can mitigate the permutation-invariance of attention modules? 2) Same question for transformers used in practice, notably in language tasks, that use causal attention in decoder-only blocks. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your detailed and thoughtful review. We are encouraged by your recognition of our work's key strengths, and we greatly appreciate your thoughtful suggestions for improvement. # Weaknesses ## 1. On the lack of explanation for FOC[+; MOD] and related language classes: - We agree with the reviewer that Table 1 refers to FOC[+; MOD], but the manuscript does not include even a brief explanation. This is an important oversight, especially for non-experts. We will revise the paper to include concise definitions and the necessary background so that the impact of our results can be better understood. Thank you for pointing this out. ## 2. On the applicability of our results to more realistic Transformer architectures (e.g., with learnable PE or RoPE): - We address this concern in more detail under the “Questions for Authors” section below. ## 3. On the practical relevance of our conclusions, especially with respect to generalization and emergent capabilities: - As also noted in our response to Reviewer Je32, we acknowledge that the term “lookup table” was overstated. While valid in a theoretical sense under fixed precision, it does not fully capture a model’s generalization capacity. We view expressivity and generalization as distinct: even under strict constraints, a model can still generalize within its representable class. - Although our results do not directly explain emergent behaviors in large-scale Transformers, they provide a foundation for understanding expressivity under resource constraints—an important step toward formalizing learnability. We hope our simplified framework helps clarify how architectural choices affect expressivity and contributes to bridging theory and practice. # Other Comments Or Suggestions - We also thank the reviewer for pointing out the many minor typos and expressions that could be improved.
- > l.165: Given that GPTs use causal attention, should it be "Since decoder-only Transformer" instead of "Since encoder-only Transformer"? - As noted in our response to Reviewer Je32, our intention was indeed to contrast the decoder-only setup with encoder-based models, but the sentence as written was confusing. We will revise the text to clarify our intended meaning. # Questions For Authors ## 1. The effects of positional encodings on the permutation-invariance of attention modules > In practice, most transformers make use of positional encoding, either deterministic or learnable or more complex ones (ROPE). How would the insights translate to such a model given that they can mitigate the permutation-invariance of attention modules? - We appreciate this important question regarding the extension to more realistic positional encoding schemes. In fact, learnable positional encodings were implicitly treated in our framework as a form of absolute positional encoding (APE). In the manuscript, we focused on periodic APE as a representative idealized case. - As for RoPE, it introduces a structurally distinct behavior compared to NoPE and APE. Although a detailed analysis of RoPE is outside the scope of this work and would require separate treatment, we are also interested in this direction and plan to investigate RoPE-based architectures in future work. ## 2. Decoder-Only Transformers and Expressivity > Same question for transformers used in practice, notably in language tasks, that use causal attention in decoder-only blocks. - Regarding decoder-only architectures, our setting assumes $\Omega(1)$ decoding steps. Therefore, our results apply to any number of decoding steps that scales with the input length. - Importantly, our results show that in fixed-precision settings, the number of decoding steps does not affect the model’s expressivity.
- In this context, causal attention enables NoPE Transformers to infer the relative positions of input tokens, partially compensating for the lack of explicit positional encodings. - That said, we recognize that generalizing our results to broader architectural variations remains a challenging and important direction. In future work, we hope to extend the framework to cover other types of positional encodings as well as additional architectural features such as sparse attention or grouped-query attention. # Summary - We again thank the reviewer for their thoughtful feedback—including the helpful identification of typos—and kind recognition of our work’s strengths. Your comments have helped us improve both the clarity and positioning of our contributions. - While our setting is simplified, we believe it offers a useful foundation for understanding expressivity under resource constraints, and we will revise the manuscript to better explain technical details and clarify connections to practical models. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed answers and for their efforts to address my concerns. I will consider that along with the other reviews for my final recommendation.
Summary: This paper investigates the expressivity of the transformer architecture when it is constrained to operate at fixed numerical precision and not involve any infinite parameters -- a setup seemingly matching real-world setups closely. Perhaps surprisingly, the results indicate that the architecture can only recognize finite and co-finite formal languages, suggesting that the architecture is ultimately limited to finite memorization. ## update after rebuttal I stand by my original assessment that the relevance to real-world models remains unclear. Claims And Evidence: I believe that the mathematical theorems in the paper are correct. However, I am unsure about the implications for machine learning that the paper claims. The abstract suggests that practical transformers “effectively function[] as little more than efficient lookup tables”. This conclusion seems entirely at odds with a variety of work finding transformers to empirically show nontrivial generalization, e.g. length generalizing in algorithmic tasks [1,2,3] or generalizing to unseen in-context learning tasks [4]. While real-world transformers undeniably operate in fixed precision, it remains unclear how useful this asymptotic is for understanding empirical behavior. Note that given plausible precision and size of real-world transformers, the sizes of the finite languages that they can potentially model will still be extremely large, large enough to lead to nontrivial algorithmic generalization at finite lengths. Experiments backing up the relevance of the conclusions to actually implemented machine learning at reasonable input lengths would be a way of helping make the claim that transformers, due to the theorems proven here, are indeed "little more than efficient lookup tables". [1] Zhou et al, What Algorithms can Transformers Learn? 
A Study in Length Generalization, ICLR 2024 [2] Kazemnejad et al, 2024 (cited in the paper) [3] Huang et al, A Formal Framework for Understanding Length Generalization in Transformers, ICLR 2025 [4] Garg et al, What Can Transformers Learn In-Context? A Case Study of Simple Function Classes, http://arxiv.org/abs/2208.01066 Methods And Evaluation Criteria: N/A (no empirical evaluation). Theoretical Claims: The theorems appear sound to me. I believe the theorems to be correct based both on my reading of the proofs and on my experience in this subfield. Experimental Designs Or Analyses: N/A (no empirical experiments). Supplementary Material: None Relation To Broader Scientific Literature: The paper links up to a recent line of work on the theoretical expressiveness of the transformer architecture. It is already known that transformers with finite precision face strong limitations (Proposition 1 in Merrill & Sabharwal 2024a, cited in the paper). Whereas that reference took this result as evidence to look for other theoretical models, the current paper strengthens this result in the NoPE setup by showing that in fact the expressivity equals the class FinCofin. Essential References Not Discussed: I believe all key references are discussed. Other Strengths And Weaknesses: In my view, the primary weakness is that, as described under "Claims and Evidence", it is unclear whether the strong limitations (effectively precluding generalization to unbounded-length inputs) are relevant at all to real-world training of transformers. After all, one could just as well conclude that standard computers (which, grounded in the physical world, have finite RAM) can only compute finite-state languages, whereas there is a general consensus that real computers are Turing-complete for most practical purposes. Thus, in the absence of experiments or further non-asymptotic results, the relevance of the results to machine learning remains unclear.
Other Comments Or Suggestions: Clarification: line 142 (left) “such as“: are any other special values than +inf and -inf included? Line 140 (right column): “there is no intersection to alphabet in this study” – I don’t understand what this means, can the authors clarify? Line 231: “holds for most trained transformer models” – why “most” and not “all”? In real-world transformers, aren’t all weights finite? Questions For Authors: How do the results relate to results relating finite-precision transformers to C-RASP [5] and AC0 [6], which are far larger classes than FinCofin? Do the results from the present paper subsume those, or are there differences in the details of the formalization? [5] Yang and Chiang, Counting Like Transformers: Compiling Temporal Counting Logic Into Softmax Transformers, COLM 2024 [6] Feng et al, How Numerical Precision Affects Mathematical Reasoning Capabilities of LLMs, arxiv 2024 Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We are grateful to the reviewer for the constructive and insightful feedback. # Weaknesses: We thank the reviewer for pointing out the apparent contradiction between our claim ("fixed-precision Transformers behave as efficient lookup tables") and the observed generalization in experiments. We acknowledge that the phrase "lookup table" in the abstract may have been misleading. Our intent was to highlight theoretical limitations under fixed precision, not to imply literal memorization. Our analysis suggests that when precision is insufficient, fixed-precision Transformers cannot approximate the infinite-precision behavior seen in prior work. While we did not fully address what constitutes "sufficient" precision in the current manuscript, we plan to explore this in future work, as the reviewer helpfully suggested. Although it may seem trivial, our contribution is to understand how architectural choices within the Transformer family influence the classes of languages that can be represented, even under the same finite-precision assumptions: a boundary analysis that explicitly identifies how changes, such as removing or adding architectural components, impact expressivity. # Clarifications We thank the reviewer for these helpful clarification requests. Below, we address each of the reviewer's questions in turn. 1. Line 142 - Special values in floating-point numbers: - Any other special values except $+\infty$ and $-\infty$? - In our manuscript, we only intended to refer to the explicitly mentioned values: $+\infty$, $-\infty$, and $\mathrm{nan}$. - Within the scope of our theoretical setting, NaNs arising from division in the softmax are excluded by Assumption 5.1. However, we acknowledge that other operations (e.g., $0 + \infty$, $0 \times \infty$) may also result in NaNs and should be mentioned explicitly. We thank the reviewer for bringing this to our attention. 2.
Line 140 - Clarification of the phrase, "there is no intersection to alphabet in this study": - What we intended here is the condition $\Sigma \cap \mathbb{V} = \emptyset$, meaning that the set of special tokens does not intersect with the standard alphabet. - This restriction was made for clarity: special tokens have distinct roles — e.g., <eos> as an acceptance marker, <sep> as a separator—and keeping them disjoint from the alphabet avoids ambiguity in our definitions. 3. Line 231 - Assumption 5.1 “holds for most trained transformer models” – why “most” and not “all”: - This statement refers specifically to the inner product $q(x)k(y)^\mathsf{T}$ for all token pairs $x, y$. While $q(x)$ and $k(y)$ are practically finite in real-world models, it is difficult to make a definitive claim about all possible products. - Categorically asserting this assumption would require making claims about learned parameters, and we cannot theoretically rule out the possibility that training may produce pathological configurations. - Furthermore, violations of this assumption could lead to degenerate behaviors such as attention masking or unique hard attention. In contrast to soft attention—where weights are smoothly distributed—hard attention focuses all weight on a single token. Given prior literature, we believe it is important to distinguish soft attention from such extreme cases. # Question For Authors > Does your result subsume the results on C-RASP or AC0? Or is the formalism different? - We thank the reviewer for this insightful question, which touches on the relationship between our results and prior work on formal characterizations of Transformer expressivity, particularly in the context of C-RASP and AC0. - Regarding C-RASP, we are aware that the class FOC[+; MOD] is strictly weaker than C-RASP, as discussed in [5]. 
As we mentioned in Table 1, FOC[+; MOD] is one of the upper bounds on the expressivity of fixed-precision Transformer models with Absolute PE ([7] Theorem 2, [5] Lemma 6.1, Theorem 7.1) - Therefore, while our formalism differs, we believe our focus on the weakest setting is complementary. - As for AC0, we thank the reviewer for introducing [6], which we were not previously aware of. - Their framework, CoT[T(n), d(n), s(n)], characterizes expressivity based on decoding steps, embedding size, and precision. While they analyze cases like CoT[log n, poly(n), 1] $\subseteq$ AC0 (Theorem 3.1), our setting corresponds to CoT[T(n), 1, 1], which is not covered in [6]. We show that this setting has expressivity exactly FinCofin, likely a strictly weaker upper bound than the classes discussed in their work. Importantly, we also provide a lower bound, which they do not. # Summary We thank the reviewer again for their insightful feedback, which has helped improve the clarity and focus of our work. We hope our responses adequately address the concerns, and we will incorporate the necessary revisions in the final version. --- Rebuttal Comment 1.1: Comment: I thank the authors for these helpful clarifications and insightful responses re [5,6] (I didn't find where the authors give the reference for "[7]" -- I trust this will be included in the next version of the paper). Overall, making these aspects clearer will improve the paper. Simultaneously, I do stand by my original assessment that > In my view, the primary weakness is that, as described under "Claims and Evidence", it is unclear whether the strong limitations (effectively precluding generalization to unbounded-length inputs) are relevant at all to real-world training of transformers. 
After all, one could just as well conclude that standard computers (which, grounded in the physical world, have finite RAM) can only compute finite-state languages, whereas there is a general consensus that real computers are Turing-complete for most practical purposes. Thus, in the absence of experiments or further non-asymptotic results, the relevance of the results to machine learning remains unclear. While I consider this a weakness, I do not think it necessarily precludes publication.
Summary: This paper demonstrates that fixed-precision Transformer decoders without positional encoding are limited to recognizing only finite or co-finite languages, and modest expressivity gains are made when adding positional encodings or relaxing parameter constraints. By performing these expressivity analyses in less idealized conditions, the authors address the significant gap between theoretical models and practical implementations, and ultimately suggest that real-world Transformers may be limited in their expressivity (acting effectively as lookup tables). Claims And Evidence: The main claims in the submission are supported by mathematical proofs. However, as the authors note, the relevance of these claims to real-world settings still needs to be investigated further. Methods And Evaluation Criteria: The mathematical approaches make sense for the problem at hand. Theoretical Claims: I did not carefully check the correctness of proofs and theoretical claims, but they seemed reasonable. Experimental Designs Or Analyses: The analyses appear well designed. Supplementary Material: There was no supplement. Relation To Broader Scientific Literature: Previous research established that ideal Transformers with infinite precision are Turing-complete, while those with logarithmic precision fall within specific circuit complexity classes like TC$^0$. This paper extends these findings by revealing how practical constraints limit expressivity, highlighting that the theoretical power of Transformer models may be unattainable in actual implementations due to fixed-precision limitations. Essential References Not Discussed: There are no essential references missing that I'm aware of. Other Strengths And Weaknesses: Other strengths include a novel approach to analyzing transformer expressivity and a step towards more practically relevant analysis in this domain. 
Other weaknesses include the fact that the proofs seem to lack some rigor, and there's no supplementary material to address this. Other Comments Or Suggestions: Please see the questions below. Questions For Authors: 1) Line 165: why mention encoder only transformers here, which are not the object you study (decoder only transformers)? 2) Is sigma the input in definition 4.2? If so, should it have a time superscript? 3) Assumption 5.1 mentions plus or minus infinity cannot be reached, but the proof of Lemma 5.3 mentions that positive infinity is reached, and it uses Assumption 5.1. This seems contradictory, maybe it's a typo in the assumption. 4) In Lemma 5.3, if “the final token’s contribution vanishes”, then why is it important to have the same final character for w and w’? 5) Line 328: I think “lower” not “upper” bound is meant. Code Of Conduct: Affirmed. Overall Recommendation: 2
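Question 3 above turns on IEEE-754 arithmetic: Assumption 5.1 only forbids individual logits $q(x)k(y)^\mathsf{T}$ from being $\pm\infty$, yet finite operands can still combine into special values. A quick check, purely illustrative and not part of the paper (Python floats are IEEE-754 doubles):

```python
import math

inf = float("inf")
assert 0.0 + inf == inf        # a finite value plus infinity stays infinite
assert math.isnan(0.0 * inf)   # 0 * inf is NaN
assert math.isnan(inf - inf)   # inf - inf is NaN
assert 1e308 + 1e308 == inf    # two finite operands can still overflow to +inf
```

The last line is the case relevant to Lemma 5.3: a sum of finite terms may reach $+\infty$ without any single logit violating Assumption 5.1.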
Rebuttal 1: Rebuttal: We greatly appreciate the valuable comments provided by reviewer Je32. # Weakness: rigor of proofs We agree with the reviewer’s concern regarding the rigor of our proofs. In the current manuscript, we prioritized clarity of explanations. However, we also acknowledge that the proofs are not sufficiently rigorous in the current form. Thus, we fully intend to include additional proof details and supplementary lemmas, particularly in Section 6, in the final version. These additions will ensure that the mathematical rigor meets the standards expected by the community. # Questions For Authors ## 1. Line 165 – Mention of Encoder-Only Transformers: > Line 165: why mention encoder only transformers here, which are not the object you study (decoder only transformers)? - Our intention in mentioning encoder-only Transformers was to highlight that even in a decoder-only setting (particularly when using NoPE), positional information is implicitly captured through causal masking [1]. The contrast with encoder-based models was intended to underscore our motivation for adopting NoPE. - We will therefore revise the text to explicitly clarify our rationale. We sincerely thank the reviewer for bringing this to our attention. ## 2. $\sigma$ in Definition 4.2: > Is sigma the input in definition 4.2? If so, should it have a time superscript? - We acknowledge that we omitted an explicit statement clarifying that $\sigma \in \Sigma$. We thank the reviewer for pointing out this oversight and will clearly specify it in the revised manuscript. - Regarding the suggestion to use a time superscript, we apologize but did not fully understand the reviewer's intent. If the reviewer was referring to a notation like $\sigma_{1:t}$, we would like to clarify that our current definition is intentionally formulated so that such notation is unnecessary, as the autoregressive nature of the definition implicitly encodes this temporal dependency. 
However, if we have misunderstood the reviewer's suggestion, we would greatly appreciate further clarification. ## 3. Assumption 5.1 vs. Lemma 5.3: > Assumption 5.1 mentions plus or minus infinity cannot be reached, but the proof of Lemma 5.3 mentions that positive infinity is reached, and it uses Assumption 5.1. This seems contradictory, maybe it's a typo in the assumption. - Assumption 5.1 asserts that $qK \neq \pm \mathrm{inf}$. Thus, the divergence of the sum $\sum \exp(qK)$ to positive infinity does not contradict this assumption. ## 4. Lemma 5.3 - the final token's contribution: > In Lemma 5.3, if “the final token’s contribution vanishes”, then why is it important to have the same final character for w and w’? - The phrase “the final token’s contribution vanishes” was indeed a misstatement. Our intended claim was that the contributions of all tokens *except* the final token vanish. We sincerely thank the reviewer for highlighting this error, and we will correct the statement in the revised manuscript. ## 5. Line 328 - mistake: > Line 328: I think “lower” not “upper” bound is meant. - We agree with the reviewer’s observation that “lower bound” is the appropriate term. This error will be corrected in the revised version. We appreciate the reviewer pointing this out. # Summary Overall, while the current manuscript prioritizes clarity and simplicity in its presentation, we will carefully incorporate the reviewer’s suggestions, including the mentioned clarifications and additional proof details, to enhance the rigor of our work. We sincerely thank the reviewer once again for these insightful and constructive comments, which have significantly helped us improve the quality of the paper. We hope that our responses sufficiently address the reviewer’s concerns and clarify the contributions of our work. Please let us know if any further details or clarifications would be helpful. --- Rebuttal Comment 1.1: Comment: Thank you for the helpful response! 
I will consider this further along with the other reviews (and their responses) to settle on a final recommendation. This is minor, but the fact that $\sigma \in \Sigma$ wasn't the confusing part. I was confused by how the transformer input could be represented as $\sigma$ concatenated with just the prior timestep's output. I would think it would have to be concatenated with all prior outputs. --- Reply to Comment 1.1.1: Comment: Thank you. Now I understand what you say. It's true that my definition doesn't allow reference to past outputs. I'll improve it later.
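The divergence argument in point 3 of the rebuttal above (the sum $\sum \exp(qK)$ reaching $+\infty$ while every logit stays finite) is easy to reproduce numerically. A toy sketch in float64, illustrative only and not the paper's construction:

```python
import math

# Bounded logits: each exp(qK) term is individually finite in float64.
logits = [708.0] * 10
exps = [math.exp(x) for x in logits]
assert all(e < float("inf") for e in exps)

# Yet the running sum overflows to +inf, so every softmax weight
# collapses to exactly 0.0 -- the distribution degenerates.
total = sum(exps)
weights = [e / total for e in exps]
assert total == float("inf")
assert weights == [0.0] * len(weights)
```

In infinite precision the same weights would simply be uniform ($1/n$ each); the collapse to zero is purely a finite-precision effect.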
Flat-LoRA: Low-Rank Adaptation over a Flat Loss Landscape
Accept (poster)
Summary: The authors present an approach to improving generalization in PEFT. They argue that generalization is strongly correlated with flat minima and that existing empirical approaches are not directly applicable to PEFT. To address this, they propose Flat-LoRA, an extension of SAM that facilitates the discovery of flat minima in the entire parameter space while training within a subspace. By leveraging existing techniques, such as the relaxation of inner optimization problems using random directions and on-the-fly noise generation, the authors demonstrate that Flat-LoRA consistently improves over baseline methods, albeit with a small difference. Claims And Evidence: The proposed method optimizes using the same framework as SAM and does not provide a mathematical analysis of the algorithm's convergence. However, experimental results demonstrate a clear advantage of the method. Experiments with OOD data show that Flat-LoRA improves generalization, supporting the paper’s claim. Methods And Evaluation Criteria: The paper presents results on common datasets (GLUE, InstructEval, MT-Bench) for NLP tasks as well as small-scale qualitative T2I experiments to provide sufficient evidence for Flat-LoRA’s effectiveness. Theoretical Claims: The mathematical derivations appear correct to the best of my knowledge. Experimental Designs Or Analyses: To prove that the proposed method improves generalization, the authors analyze it in multiple experimental setups, following the widely accepted frameworks and using the same metrics and datasets. As the improvement over most of the benchmarks is minute, it can be beneficial to provide statistical analysis to justify the difference, e.g., in Tables 1 and 2. Improvement in T2I task is shown only qualitatively and requires additional analysis using common datasets (i.e., measure CLIP-T/CLIP-I on DreamBench). 
Supplementary Material: The provided supplementary material contains an implementation of the proposed method and, to the best of my knowledge, does not contain errors. Relation To Broader Scientific Literature: The paper references relevant prior work and discusses concurrent approaches. The proposed method complements these existing approaches and can be effectively combined with them. Essential References Not Discussed: Some recent PEFT methods are missing, including GaLore [1], CorDA [2]. [1] GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection [2] CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning Other Strengths And Weaknesses: Strengths: - Paper is well-written and easy to read Weaknesses: - Table 4 does not provide variance - Tables 1, 2, and 5 require more analysis to prove that the difference is statistically significant, as most of the improvement of Flat-LoRA is lower than the standard deviation of the metrics. - Some of the relevant baselines are mentioned but are not used in comparison (Balancedness-Aware Regularization [3], LoRA-Pro [4], GaLore [1], CorDA [2]) [3] Implicit Regularization of Sharpness-Aware Minimization for Scale-Invariant Problems [4] LoRA-Pro: Are Low-Rank Adapters Properly Optimized? Other Comments Or Suggestions: - Lines 169-171 duplicate lines 167-169. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
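The objective this review describes, minimizing the expected loss of randomly perturbed *merged* weights while updating only the low-rank factors, can be sketched on a one-dimensional toy regression (all names and numbers here are illustrative, not the authors' implementation):

```python
import random

random.seed(0)
w0, target = 1.0, 3.0          # frozen pre-trained weight and regression target
a, b = 0.5, 0.5                # low-rank factors: only these are trained
sigma, lr = 0.05, 0.1          # perturbation strength and learning rate

for _ in range(200):
    # Perturb the merged weight w0 + a*b (not a or b separately), as in the
    # Bayesian expected-loss objective, then backprop to a and b only.
    eps = random.gauss(0.0, sigma)
    residual = (w0 + a * b + eps) - target   # loss = residual**2
    a, b = a - lr * 2 * residual * b, b - lr * 2 * residual * a

# The merged weight settles near the target despite the injected noise.
assert abs((w0 + a * b) - target) < 0.2
```

The key design choice mirrored here is that noise enters through the full (merged) parameter, so flatness is sought in the full space even though optimization stays in the low-rank subspace.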
Rebuttal 1: Rebuttal: Thanks for your detailed and constructive comments. We address your concerns point by point as follows: --- **Q1:** Table 4 does not provide variance. **A1:** Following your suggestion, we reran the experiments and report the variance as follows:

| Method | MMLU | DROP | BBH | Human-Eval |
| --- | --- | --- | --- | --- |
| Full FT | 52.36±0.45 | 38.23±0.47 | 35.38±0.35 | 15.44±0.35 |
| LoRA | 51.22±0.38 | 37.26±0.63 | 34.77±0.22 | 13.01±0.93 |
| Flat-LoRA | 51.88±0.55 | 38.18±0.71 | 35.22±0.26 | 15.24±0.61 |

--- **Q2:** Tables 1, 2, and 5 require more analysis to prove that the difference is statistically significant, as most of the improvement of Flat-LoRA is lower than the standard deviation of the metrics. **A2:** Following your suggestion, we conduct statistical analysis using t-tests as follows:

(Table 1) Hypothesis: Flat-LoRA is better than LoRA

| T5-base | MNLI | SST-2 | CoLA | QNLI | MRPC |
| --- | --- | --- | --- | --- | --- |
| p (r=8) | 0.186 | 0.038 | 0.043 | 0.186 | 0.027 |
| p (r=16) | 0.623 | 0.117 | 0.023 | 0.002 | 0.046 |

(Table 2) Hypothesis: Flat-LoRA is better than LoRA

| ViT-B/32 | CIFAR-10 | CIFAR-100 | Cars | SVHN | DTD |
| --- | --- | --- | --- | --- | --- |
| p (r=8) | 0.006 | 0.008 | 0.137 | 0.125 | 0.063 |
| p (r=16) | 0.001 | 0.001 | 0.047 | 0.112 | 0.046 |

(Table 5) Hypothesis: Flat-X is better than X, where X is the base method (LoRA, PiSSA, etc.)

| T5-base | LoRA | PiSSA | LoRA-GA | DoRA | AdaLoRA | LoRA+ |
| --- | --- | --- | --- | --- | --- | --- |
| CoLA | 0.043 | 0.36 | 0.488 | 0.088 | 0.202 | 0.037 |
| MRPC | 0.027 | 0.126 | 0.301 | 0.199 | 0.183 | 0.313 |

We observe that, in most cases, the improvements are significant (i.e., p < 0.05). However, we acknowledge that in some cases, such as in Table 5, the improvements are relatively modest. This could be attributed to the strong baseline performance of the LoRA variants, which leaves limited room for further improvement. 
Moreover, we find that on large-scale training, the improvement is more significant, as shown below, demonstrating the scalability of Flat-LoRA.

(Table 3) Hypothesis: Flat-LoRA is better than LoRA

| Llama2-7b | MT-Bench | GSM8k | Human-Eval |
| --- | --- | --- | --- |
| p (r=8) | 0.007 | 0.003 | 0.007 |

--- **Q3:** Some of the relevant baselines are mentioned but are not used in comparison (Balancedness-Aware Regularization [3], LoRA-Pro [4], GaLore [1], CorDA [2]). **A3:** Thanks for providing the very relevant works. We compare them below and will add the comparison and citations in the revision.

| T5-base | oBAR [3] | nBAR [3] | LoRA-Pro [4] | GaLore [1] | CorDA [2] | Flat-LoRA |
| --- | --- | --- | --- | --- | --- | --- |
| MRPC | 88.58±0.35 | 88.63±0.42 | 89.23±0.33 | 88.90±1.12 | 89.76±0.52 | 89.59±0.37 |
| CoLA | 83.07±0.87 | 82.78±0.68 | 83.17±0.28 | 83.14±0.57 | 83.38±0.47 | 83.61±0.38 |

[1] GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection, ICML'24
[2] CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning, NeurIPS'24
[3] Implicit Regularization of Sharpness-Aware Minimization for Scale-Invariant Problems, NeurIPS'24
[4] LoRA-Pro: Are Low-Rank Adapters Properly Optimized? ICLR'25

--- **Q4:** Improvement in T2I task is shown only qualitatively and requires additional analysis using common datasets (i.e., measure CLIP-T/CLIP-I on DreamBench). **A4:** Thanks for your valuable suggestions. Following your suggestion, we finetune an SDXL model under the same setting as Figure 3, using the public dataset provided by DreamBooth, comprising 30 tasks, where each task is evaluated with 25 prompts and 4 different seeds. The CLIP-I and CLIP-T scores are reported below and are consistent with Figure 3. The model with Flat-LoRA exhibits both higher subject fidelity (CLIP-I) and prompt fidelity (CLIP-T). 
More comparisons of sample images can be found in [Figure 3](https://anonymous.4open.science/r/Flat-LoRA/rebuttal.pdf).

| | Real Image | LoRA | Flat-LoRA |
| --- | --- | --- | --- |
| CLIP-T ($\uparrow$) | - | 0.299 | 0.311 |
| CLIP-I ($\uparrow$) | 0.881 | 0.819 | 0.825 |

--- **Q5:** Lines 169-171 duplicate lines 167-169. **A5:** Thanks for your careful reading. We will fix it in the revision.
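The p-values in A2 come from t-tests on run-level scores; the same kind of check can be reproduced from summary statistics alone with Welch's t. A minimal sketch (the per-cell sample size n=3 is an assumption here, since the number of runs is not stated in the rebuttal):

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic from means, standard deviations, and sample sizes."""
    return (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Human-Eval from A1: Flat-LoRA 15.24 +/- 0.61 vs. LoRA 13.01 +/- 0.93.
t = welch_t(15.24, 0.61, 3, 13.01, 0.93, 3)
assert t > 3.0   # comfortably above typical small-sample critical values
```

Turning t into a p-value additionally requires the Welch-Satterthwaite degrees of freedom and a t-distribution CDF (e.g., `scipy.stats.ttest_ind_from_stats` with `equal_var=False`).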
Summary: **1. Summary of Contributions**: The paper introduces **Flat-LoRA**, a novel method for low-rank adaptation (LoRA) that aims to find parameter-efficient fine-tuning solutions residing in a flatter region of the full parameter space. The authors identify that standard LoRA, while efficient, might converge to minima that appear flat within its low-dimensional optimization space but are sharp in the context of the original, full parameter space, potentially hindering generalization. To address this, Flat-LoRA employs a **Bayesian expectation loss objective with carefully designed random weight perturbations** applied to the merged weights (pre-trained weights + LoRA adaptations). This approach smooths the loss landscape in the full parameter space, encouraging convergence to flatter minima without incurring the significant computational and memory overheads associated with sharpness-aware minimization (SAM). A key contribution is a **memory-efficient perturbation generation strategy** that relies on storing random seeds and filter norms instead of full perturbation matrices. The paper demonstrates through extensive experiments across diverse tasks—including NLP, computer vision, and generative modeling—that Flat-LoRA improves both in-domain and out-of-domain generalization compared to standard LoRA and sometimes even surpasses full fine-tuning. * **Quality of Writing/Presentation**: * The paper is generally **well-written and clearly structured**. The introduction effectively sets the context and motivates the proposed approach. * The explanation of LoRA and Flat-LoRA is lucid. * The figures (Figure 1, 2, 3, 4, 5, 6) are helpful in illustrating the concepts and results. * The experimental setup and results are detailed, allowing for potential reproducibility. * The inclusion of ablation studies and comparisons with related work enhances the presentation. * The appendix provides additional experimental results, further supporting the claims. 
* **Minor improvements could be made by ensuring consistent levels of detail in describing the experimental setups across different tasks and models.** For instance, more specific hyperparameter choices could be consistently provided in the main text or referenced clearly in the appendix. * **Literature**: * The paper provides a comprehensive overview of related work on flat minima and generalization as well as low-rank adaptation techniques and their variants. * It appropriately cites key papers in both areas, including seminal works on flat minima (Hochreiter & Schmidhuber, 1997; Foret et al., 2020) and LoRA (Hu et al., 2022). * The discussion of the limitations of applying SAM directly to LoRA parameters (Li et al., 2024a) and the full parameter space effectively positions Flat-LoRA within the existing literature. * The comparison with recent LoRA enhancement methods (AdaLoRA, DoRA, LoRA+, PiSSA, LoRA-GA) demonstrates a strong understanding of the current state-of-the-art. **3. Pros and Cons**: * **Pros**: * **Improved Generalization**: Flat-LoRA demonstrates significant improvements in both in-domain and out-of-domain generalization across diverse tasks and models. * **Computational Efficiency**: It maintains the training efficiency of standard LoRA, avoiding the doubled training cost of SAM. * **Memory Efficiency**: The method is memory-efficient by storing only random seeds and filter norms, unlike SAM which requires storing full perturbation matrices. * **Seamless Integration**: Flat-LoRA can be easily integrated with existing LoRA variants, leading to further performance gains. * **Principled Approach**: The use of Bayesian expectation loss provides a theoretical basis for seeking flatter minima in the full parameter space. * **Empirical Validation**: Extensive experiments on various tasks and models provide strong support for the effectiveness of the proposed method. * **Scalability**: Demonstrated effectiveness on large language models like Llama-2-7B. 
* **Cons**: * **Hyperparameter Sensitivity**: The performance of Flat-LoRA depends on the choice of the perturbation strength (σ), and finding the optimal value might require tuning. * **Limited Theoretical Depth**: While the paper provides a clear empirical demonstration, a more in-depth theoretical analysis connecting the proposed approach directly to improved generalization in the context of LoRA could be beneficial. * **Potential Overhead (Minor)**: While minimal compared to SAM, Flat-LoRA does introduce a small additional memory overhead for storing filter norms and potentially a slight increase in computation time for generating and applying perturbations. **4. Classification of Concerns**: * The **hyperparameter sensitivity to perturbation strength (σ)** is a **minor concern**. While tuning might be required, the ablation study provides some guidance on reasonable ranges. * The **limited theoretical depth connecting the approach directly to LoRA generalization** is a **minor concern**. The strong empirical results mitigate this, but further theoretical investigation could strengthen the paper. * The **potential minor overhead in memory and computation** is also a **minor concern**, as the experiments show it to be quite small compared to the benefits achieved. **5. Overall Assessment**: The paper presents a **novel and effective approach, Flat-LoRA, for improving the generalization of Low-Rank Adaptation by optimizing for flatness in the full parameter space**. The method is well-motivated, theoretically grounded in Bayesian expected loss, and supported by extensive and convincing experimental results across a diverse range of tasks and models. The **key strengths** of the paper lie in its **significant performance improvements over standard LoRA, its computational and memory efficiency compared to sharpness-aware methods, and its ease of integration with existing LoRA techniques**. 
While there are minor concerns regarding hyperparameter sensitivity and theoretical depth, they do not significantly detract from the overall contribution. * **Points of agreement:** I agree with the authors' premise that the flatness of the loss landscape in the full parameter space is crucial for generalization, even when using parameter-efficient methods like LoRA. The experimental results support their claim that Flat-LoRA can lead to significant performance improvements across various tasks and models. I also agree that the memory efficiency of their proposed perturbation strategy is a significant advantage, especially for large-scale models. * **Anything learned from the target:** I learned a valuable insight into the potential limitations of solely focusing on the optimization landscape within the low-dimensional space of LoRA and the importance of considering its relationship with the full parameter space. The proposed method for efficiently approximating the effect of perturbations in the full parameter space through random seeds and filter norms is also a noteworthy technique. Claims And Evidence: Evaluation of the claims made in the Flat-LoRA paper: * **Flat-LoRA improves generalization:** The paper claims Flat-LoRA improves both in-domain and out-of-domain generalization. This is supported by experimental results across various tasks, including natural language understanding, image classification, dialogue generation, mathematical reasoning, coding abilities, and text-to-image generation. Tables 1, 2, 3, and Figure 3 show improved performance compared to standard LoRA. * **Flat-LoRA maintains computational and memory efficiency:** The paper claims Flat-LoRA integrates seamlessly with existing methods while maintaining computational and memory efficiency. Unlike SAM, it avoids additional gradient steps and remains memory-efficient by storing only the random seed and filter norms. 
Table 7 shows the memory and time usage, indicating minimal overhead compared to LoRA. * **Filter structure and Input dimension:** The approach considers filter structure and input dimension. The variance introduced during the forward pass by random weight perturbation is independent of the input dimension. * **Flat Minima and Generalization:** Flat minima in the loss landscape improve generalization and robustness to distribution shifts. This is well established in the literature. * **Storing random seed for memory efficiency:** Storing only the seed for the random generator and filter norms allows for the reconstruction of $\varepsilon_W$ when needed and requires minimal memory overhead. * **Mixed precision training:** Flat-LoRA facilitates memory-efficient integration of perturbation injection during precision casting in mixed-precision training. **Potential Issues to Investigate Further:** * **Low-rank adaptation may exhibit sharper loss landscapes:** The paper states that low-rank adaptation may lead to sharper loss landscapes in the full parameter space, which Flat-LoRA mitigates. There is no theory to support it. * **Hyperparameter Sensitivity:** The paper mentions setting the random perturbation strength σ to specific values (e.g., 0.05 for T5-base, 0.15 for CLIP ViT-B/32). An ablation study on the variance magnitude was performed and the results are shown in Table C3. It may be worth checking the sensitivity of Flat-LoRA to this parameter across different tasks and model sizes. * **Scope of Improvement:** While Flat-LoRA shows consistent improvements, the magnitude of these improvements varies across different tasks and datasets. It is important to examine scenarios where Flat-LoRA's benefits are less pronounced. * **Integration Complexities:** The guidelines mention that potential overheads or limitations should be addressed. The flat loss objective can be seamlessly integrated with previous approaches to yield consistent improvements. 
A clear guideline or step in that area can be helpful. Methods And Evaluation Criteria: * **Methods**: * The paper introduces **Flat-LoRA**, which optimizes low-rank adaptation within a flat region of the full parameter space. This addresses the problem of standard LoRA, which may find solutions that are sharp in the full parameter space, potentially harming generalization. * Flat-LoRA uses a **Bayesian expectation loss objective** with random weight perturbations to smooth the loss landscape, promoting convergence to flatter minima. This is a computationally efficient alternative to sharpness-aware minimization (SAM). * The method includes a **refined random perturbation generation strategy**, considering weight magnitude and model width scaling, to improve generalization performance. * The paper also introduces a method for **storing random seeds** to ensure memory efficiency. * **Evaluation Criteria**: * The paper compares Flat-LoRA with other LoRA variants, including PiSSA, LoRA-GA, DoRA, AdaLoRA and LoRA+. Flat-LoRA can be seamlessly integrated with previous approaches. * The paper evaluates Flat-LoRA on diverse tasks: **natural language understanding, image classification, dialogue generation, mathematical reasoning, coding abilities, and text-to-image generation**. * The models were evaluated based on performance metrics suitable for each task, such as **accuracy** for natural language understanding and image classification, **first-turn score with GPT-4** for chat tasks, **accuracy** for math tasks, and **PASS@1 metric** for code tasks. * The study includes **out-of-domain generalization** experiments using corrupted datasets and instruction-following benchmarks. * **Ablation studies** are conducted to assess the impact of different LoRA ranks and perturbation variance. * The **memory and time costs** of Flat-LoRA are compared to those of LoRA. 
* The loss landscape is visualized to demonstrate that Flat-LoRA achieves a flatter loss landscape compared to LoRA. * **Appropriateness**: * The **methods** seem appropriate for the problem. Flat-LoRA directly addresses the limitations of LoRA by seeking flatter minima in the full parameter space, which is expected to improve generalization. The use of Bayesian expectation loss and refined perturbation strategies aims to achieve this efficiently. * The **evaluation criteria** are also well-suited. The tasks cover a wide range of applications relevant to large language models and computer vision models. The use of both in-domain and out-of-domain datasets, along with ablation studies, provides a comprehensive assessment of Flat-LoRA's performance and robustness. * The evaluation includes **comparison with SAM**, showing that Flat-LoRA achieves comparable or superior performance to SAM while requiring less memory and training time. Theoretical Claims: Yes, I checked proposition 3.2 and its derivations in Equations 8 and 9. But I have not thoroughly verified Equation 9. Also, Not sure how it behaves with the dimensionality of W. Experimental Designs Or Analyses: * **Hyperparameter Tuning and Selection:** The paper mentions specific values for the random perturbation strength $\sigma$ for different models and datasets. While an ablation study on $\sigma$ is presented in Appendix C, it's focused only on CIFAR-10/100 with CLIP ViT-B/32. **The paper lacks a thorough justification for the chosen values of $\sigma$ for other tasks and models.** It's possible that the reported improvements are sensitive to this hyperparameter, and the selected values might not be optimal across all settings. A more systematic approach to hyperparameter tuning, or at least a wider exploration of $\sigma$ values across different experiments, would strengthen the validity of the results. 
* **Comparison with Baselines:** While Flat-LoRA is compared to standard LoRA and other LoRA variants, the depth of the comparison could be enhanced. * For instance, in the comparison with other LoRA variants (Table 5), the paper shows consistent improvements. However, **it doesn't delve into whether these improvements are statistically significant.** The authors report confidence intervals, but no explanation is provided around them in Section 4.6. * The comparison with SAM (Table 6) highlights the memory and time efficiency of Flat-LoRA. However, the choice of perturbation radius $\rho$ for SAM is based on a limited exploration ($\rho \in \{0.005, 0.01, 0.05, 0.1, 0.2, 0.3, 0.5\}$). It's possible that a different value of $\rho$ could lead to better performance for SAM, potentially altering the conclusions of the comparison. * The paper mentions stronger baselines achieved for Llama-2-7B compared to previous work. While this is a positive aspect, it also raises the question of **whether the relative improvements of Flat-LoRA would hold against even stronger, more recent baselines** that might have emerged since the submission of this work. * **Out-of-Domain Generalization Analysis:** The out-of-domain experiments on corrupted CIFAR-100-C and instruction following are valuable. However, the analysis could be more nuanced. * For CIFAR-100-C (Figure 4), the performance gains increase with corruption severity. While this suggests Flat-LoRA's robustness, **a statistical analysis of these gains would be beneficial to confirm their significance at different corruption levels.** * For instruction following (Table 4), the improvements on DROP and Human-Eval are highlighted as more pronounced. **A discussion on why Flat-LoRA shows a greater advantage on these specific tasks compared to MMLU and BBH could provide more insight into the method's strengths.** * **Loss Landscape Visualization:** The loss landscape visualizations (Figure 6) are qualitative.
While they visually suggest a flatter landscape for Flat-LoRA, **it's important to acknowledge that these are projections along random directions in the high-dimensional parameter space.** The flatness observed in these 2D projections might not fully capture the characteristics of the loss landscape in all relevant directions. A more quantitative measure of flatness, if feasible, could complement these visualizations. * **Scope of Applicability:** While the paper demonstrates Flat-LoRA's effectiveness across various tasks, **the underlying reasons for its varying degrees of improvement across different modalities and datasets are not thoroughly investigated.** Understanding the conditions under which Flat-LoRA provides the most significant benefits would be valuable for future research and application. In conclusion, while the experimental design is broad and covers multiple aspects, a more critical perspective reveals potential limitations in the thoroughness of hyperparameter tuning, the statistical rigor of baseline comparisons, the depth of out-of-domain analysis, and the quantitative nature of the loss landscape evaluation. Addressing these points could further strengthen the claims made in the paper. Supplementary Material: Appendix A is critical. I reviewed it. Relation To Broader Scientific Literature: The key contributions of the Flat-LoRA paper are significantly related to the broader scientific literature in the field of parameter-efficient fine-tuning and the understanding of loss landscapes in deep learning. Specifically, Flat-LoRA builds upon and extends prior work in Low-Rank Adaptation (LoRA) and methods for finding flat minima. Here's a breakdown of how Flat-LoRA connects to the literature: * **Building upon Low-Rank Adaptation (LoRA):** Flat-LoRA directly addresses a potential limitation identified within the LoRA framework [Hu et al., 2022]. 
While LoRA efficiently reduces the number of trainable parameters by optimizing low-rank matrices, the authors of Flat-LoRA observed that **a flat minimum in the LoRA optimization space might still correspond to a sharp region in the full parameter space**. This idea is illustrated in Figure 1 and further supported by Figure 6 and the discussion in Section 3.2. Flat-LoRA aims to improve upon standard LoRA by explicitly considering the flatness of the loss landscape in the full parameter space, which is a novel perspective compared to many existing LoRA enhancements. * **Addressing Limitations of Sharpness-Aware Minimization (SAM) in the Context of LoRA:** The paper discusses Sharpness-Aware Minimization (SAM) [Foret et al., 2020], a well-established technique for improving generalization by seeking flat minima. The authors acknowledge the potential of integrating SAM with LoRA (LoRA-SAM) [Li et al., 2024a]. However, they highlight several limitations: **SAM applied to LoRA parameters only optimizes sharpness in a restricted space**, **SAM requires an additional gradient step, doubling training cost**, and **computing sharpness in the full parameter space with SAM is memory-intensive**. Flat-LoRA offers an alternative approach to achieving flat minima that aims to overcome these computational and memory overheads. * **Leveraging Bayesian Expectation Loss and Random Weight Perturbation (RWP):** Flat-LoRA's core idea of using a Bayesian expectation loss objective [Duchi et al., 2012; Bisla et al., 2022] to smooth the loss landscape and pursue flat minima is rooted in prior work. This line of research suggests that minimizing the expected loss under random weight perturbations can lead to better generalization. 
Flat-LoRA builds upon this by designing a **refined random perturbation generation strategy** that considers the filter structure and input dimension, differentiating it from simpler RWP methods like Gaussian Model Perturbation (GMP) [Wang & Mao, 2021] or basic random noise injection [Wu et al., 2022; Li et al., 2024b]. The memory efficiency of Flat-LoRA, achieved by storing only the random seed and filter norms, directly addresses a key concern when applying perturbation-based methods to parameter-efficient fine-tuning. * **Connection to the Broader Understanding of Flat Minima and Generalization:** The paper's motivation stems from the widely held belief that **flat minima in the loss landscape are linked to improved generalization and robustness**. Flat-LoRA contributes to this understanding by proposing and demonstrating a method to find flatter minima in the full parameter space specifically within the context of efficient fine-tuning using LoRA. The experimental results, particularly the improved out-of-domain generalization on corrupted datasets and instruction following, provide empirical support for this connection in the context of Flat-LoRA. * **Orthogonal to Other LoRA Enhancement Techniques:** The authors explicitly state that Flat-LoRA's approach of optimizing the sharpness of the loss landscape in the full parameter space is **orthogonal to other proposed methods for enhancing LoRA performance**, such as adaptive rank allocation (AdaLoRA [Zhang et al., 2023a, 2023b]), decomposition of weight updates (DoRA [Liu et al., 2024]), improved initialization (PiSSA [Meng et al., 2024]; LoRA-GA [Wang et al., 2024]), and learning rate adjustments (LoRA+ [Hayou et al., 2024]). The experiments in Section 4.6 demonstrate that Flat-LoRA can be seamlessly integrated with some of these methods (e.g., Flat-PiSSA, Flat-DoRA), leading to further performance improvements, highlighting its potential as a complementary technique. 
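The memory-efficiency point above (storing only a random seed rather than the noise tensor, while minimizing a one-sample estimate of an expected loss of the form $\mathbb{E}_{\varepsilon}[L(W_0 + BA + \varepsilon)]$; this notation is a paraphrase of the reviews, not copied from the paper) can be made concrete with a small sketch. Everything here is a hypothetical illustration, not the authors' code: the noise is injected before the forward pass, and regenerated from the saved seed to undo the perturbation after backward, so only $O(1)$ RNG state is kept instead of an $O(m \times n)$ perturbation tensor.

```python
import torch

def flat_step(weight, x, y, sigma=0.05):
    # Hypothetical sketch of one seed-recoverable perturbed training step
    # (not the authors' implementation): store only the RNG seed, then
    # replay the generator to remove the exact same noise afterwards.
    seed = torch.seed()                     # O(1) state instead of O(m*n) noise
    with torch.no_grad():
        weight += sigma * torch.randn_like(weight)   # inject noise
    loss = ((x @ weight.T - y) ** 2).mean()          # loss at perturbed weights
    loss.backward()                                  # gradients evaluated there
    torch.manual_seed(seed)                          # replay the generator
    with torch.no_grad():
        weight -= sigma * torch.randn_like(weight)   # exactly undo the noise
    return float(loss)
```

After the step, the weights are bit-for-bit restored while the accumulated gradient reflects the smoothed (perturbed) objective, which is the mechanism the "storing random seeds" bullet describes.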
In summary, Flat-LoRA contributes to the scientific literature by: * **Identifying a limitation of standard LoRA** regarding the sharpness of the loss landscape in the full parameter space. * **Proposing an efficient alternative to SAM** for finding flat minima in the context of LoRA, leveraging Bayesian expectation loss and a refined random perturbation strategy. * **Demonstrating improved generalization** (both in-domain and out-of-domain) across a wide range of tasks and model sizes. * **Offering a memory and computationally efficient method** that can be readily integrated with existing LoRA training pipelines and is orthogonal to other LoRA enhancement techniques. Essential References Not Discussed: NA Other Strengths And Weaknesses: Highlights from the comparison with SAM; the following claims will help other areas of research as well: * **LoRA-SAM optimizes sharpness in a restricted space (the LoRA parameter space), which may not effectively improve generalization**. * The paper explicitly states that LoRA constrains optimization to a much lower-dimensional space. Figure 1 illustrates this by showing that a flat minimum in the LoRA space (blue curve) can still exhibit sharp directions in the full parameter space (red curve). * Section 3.2 discusses applying SAM to LoRA parameters (Eqn. 2). It argues that focusing solely on this restricted space might have limitations because during inference, the LoRA adaptation is merged into the pre-trained weights. A solution that is good in the LoRA space might be in a sharp region of the full parameter space, potentially harming generalization. * Equation (4) approximates the weight perturbation applied to the full parameters when SAM is applied to LoRA, showing it is roughly proportional to $(\nabla_W L)A^TA$. The paper argues this implies that SAM on LoRA only optimizes sharpness along the column space spanned by $A$, which is a small subspace of the full parameter space.
* **Empirical evidence is provided in Table 5 and Table 6**, which show that applying SAM constraints solely to LoRA parameters (referred to as LoRA-SAM or LoRA+SAM A,B) does not consistently lead to significant improvements in generalization on GLUE datasets. In Table 6, LoRA+SAM applied to A and B even performs worse than standard LoRA on both CoLA and MRPC. * **SAM requires an additional gradient step, doubling the training cost and rendering it impractical for large models**. * The introduction to Section 2.1, which discusses flat minima and generalization, mentions that SAM doubles the training time compared to regular training, limiting its applicability to large-scale training. * Section 3.2, when proposing Flat-LoRA, reiterates that directly applying SAM to optimize the sharpness of the full weight space doubles the training cost, which is less desirable for large models. * Table 6 explicitly compares the training time, showing that LoRA+SAM (whether applied to A,B or W) incurs 2x the training time of standard LoRA and Flat-LoRA. * **Computing sharpness in the full parameter space necessitates calculating gradients and storing perturbations for all weights, which contradicts the principles of parameter-efficient fine-tuning**. * Section 3.2 explains that computing sharpness in the full parameter space requires calculating gradients and storing perturbations for all weights, contradicting the principles of parameter-efficient fine-tuning. * When comparing with SAM applied to the full parameter space (SAM on W or LoRA+SAM W), Table 6 indicates that it requires $O(m \times n)$ additional memory to store adversarial weight perturbations, making it impractical for parameter-efficient training. * In contrast, Flat-LoRA is presented as a method that addresses these issues by employing Bayesian expectation loss with efficient random weight perturbations that can be stored as random seeds, requiring only $O(m)$ additional memory. 
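The 2x-training-time and $O(m \times n)$-memory points above follow from SAM's structure: one backward pass to find the adversarial perturbation, a second at the perturbed point, and the perturbation itself must be held in memory to be undone. A generic sketch of that update (illustrative only; names and normalization are not taken from the paper or any specific SAM implementation):

```python
import torch

def sam_step(params, loss_fn, opt, rho=0.05):
    # Generic SAM update sketch: two forward-backward passes per step
    # (hence ~2x cost) and an epsilon tensor per parameter (O(m*n) memory).
    loss_fn().backward()                              # pass 1: gradient at w
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params)) + 1e-12
    eps = []
    with torch.no_grad():
        for p in params:
            e = rho * p.grad / grad_norm              # perturbation to store
            p.add_(e)                                 # climb to w + eps
            eps.append(e)
            p.grad = None                             # discard pass-1 grads
    loss = loss_fn()
    loss.backward()                                   # pass 2: gradient at w + eps
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)                                 # return to w
    opt.step()                                        # descend with SAM gradient
    opt.zero_grad()
    return float(loss)
```

The contrast with the $O(m)$ / seed-based bookkeeping described for Flat-LoRA is visible in the `eps` list, which duplicates the full parameter shapes.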
Other Comments Or Suggestions: **Presentation in Tables**: In some tables (e.g., Table 1, 2, 3), the ± standard deviation values are provided. Ensuring consistent precision in reporting these values across all tables could improve the overall presentation. **Detailed Evaluation**: * **Novelty, Relevance, and Significance**: * **Novelty**: The idea of explicitly optimizing for flatness in the full parameter space within the LoRA framework, using a Bayesian expectation loss with a memory-efficient random perturbation strategy, is slightly **novel** but not unique. * **Relevance**: Parameter-efficient fine-tuning (PEFT) methods like LoRA are highly relevant due to the prohibitive costs of fine-tuning large-scale pre-trained models. Improving the generalization capabilities of these methods is crucial for their broader applicability. The problem of sharp minima in the context of low-rank adaptation is a pertinent issue, making Flat-LoRA's approach highly relevant to the PEFT research community. * **Significance**: The experimental results demonstrating **consistent improvements in both in-domain and out-of-domain generalization across a wide range of tasks and models** highlight the **significance** of Flat-LoRA. The fact that Flat-LoRA achieves these gains with minimal additional computational and memory overhead compared to standard LoRA makes it a practically significant contribution. The ability to integrate Flat-LoRA with other LoRA variants and yield further improvements also underscores its significance. * **Soundness**: * The paper provides a clear formulation of the Flat-LoRA objective based on Bayesian expected loss. * The motivation for considering the full parameter space's loss landscape is well-illustrated and argued. * The proposed random weight perturbation generation strategy is theoretically justified, with Proposition 3.2 demonstrating its input-dimension independence. 
* The memory efficiency argument, based on storing seeds and filter norms, is sound. * The **extensive experimental validation across diverse tasks (natural language understanding, image classification, dialogue generation, mathematical reasoning, coding abilities, and text-to-image generation) and model sizes (T5-base, CLIP ViT-B/32, Llama-2-7B, SDXL) provides strong empirical evidence** for the effectiveness of Flat-LoRA. * Ablation studies on LoRA rank and perturbation variance further support the design choices of Flat-LoRA. * The comparison with SAM and other LoRA variants helps to contextualize Flat-LoRA's performance and efficiency. * The visualization of the loss landscape in the full parameter space offers qualitative evidence of Flat-LoRA's ability to find flatter minima. * **One potential area for further strengthening soundness could be a more theoretical analysis of why optimizing the Bayesian expected loss in the full parameter space with the proposed perturbation strategy leads to better generalization in LoRA fine-tuning.** While Lemma 3.1 provides some insight into the smoothing effect, a more direct connection to the generalization benefits in the context of LoRA could be valuable. Questions For Authors: 1. Hyperparameter Sensitivity of $\sigma$: The paper includes an ablation study on the perturbation strength $\sigma$ in Appendix C, showing optimal ranges for CIFAR-10 and CIFAR-100. However, could the authors provide more general guidance or intuition on how to select an appropriate value for $\sigma$ when applying Flat-LoRA to new tasks or models where such ablation studies might be computationally prohibitive? For example, are there any heuristics based on learning rate, batch size, model size, or dataset characteristics that could inform the choice of $\sigma$? 
If the authors can offer some practical guidelines, it would significantly increase the usability of Flat-LoRA and positively impact my evaluation by addressing a key practical concern. Conversely, if the optimal $\sigma$ is highly task-specific without clear indicators for its selection, it would remain a potential limitation. 2. Underlying Reasons for "Flat-LoRA (all)" Performance: Appendix A explores extending the random weight perturbation to all layers. The results show improvements, but they are less pronounced than when applying Flat-LoRA to linear layers only. Could the authors offer more insight into why perturbing all layers (including layernorm, biases, embeddings, etc.) does not yield the same level of performance gain as perturbing only the linear layers in the LoRA modules? Understanding the reasons behind this could provide valuable insights into where the flatness of the loss landscape is most critical for generalization in the context of LoRA and could guide future research directions. A clear explanation would enhance the understanding of Flat-LoRA's mechanism and potentially improve my evaluation of the paper's depth. Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your detailed and constructive comments. We address your concerns point by point as follows: --- **Q1:** The performance of Flat-LoRA depends on the choice of the perturbation strength ($\sigma$), and finding the optimal value might require tuning. **A1:** We fully understand your concern. In practice, we suggest $\sigma=0.05/0.10$. Our paper includes different tasks and networks (e.g., ViT, T5, Llama, SDXL) and shows that $\sigma=0.05/0.10$ leads to consistent generalization improvements. To further address this issue, especially in relation to different sizes of neural networks, we propose an improved perturbation generation scheme that employs a scaling factor to make $\sigma$ independent of the network width (Proposition 3.2). To validate this approach, we conducted experiments on the larger ViT-L/14 model. The results demonstrate that the optimal $\sigma$ can be transferred from ViT-B/32 to ViT-L/14, and in all scenarios, the performance surpasses that of LoRA. We will include these discussions in the revision.

CIFAR-100 | LoRA | $\sigma=0.01$ | $\sigma=0.05$ | $\sigma=0.10$ | $\sigma=0.15$ | $\sigma=0.20$
---|---|---|---|---|---|---
ViT-B/32 | 87.74±0.13 | 88.14±0.22 | 88.37±0.41 | 88.65±0.35 | 88.64±0.23 | 88.06±0.31
ViT-L/14 | 92.13±0.17 | 92.33±0.07 | 92.63±0.11 | 93.11±0.13 | 92.98±0.21 | 92.46±0.03

--- **Q2:** While the paper provides a clear empirical demonstration, a more in-depth theoretical analysis connecting the proposed approach directly to improved generalization in the context of LoRA could be beneficial. **A2:** Thanks for your suggestion. Understanding the training dynamics and generalization of LoRA remains an open question. The discussions on this topic are active and often rely on assumptions like the NTK regime [1] or rank-1 perturbation [2]. Flat-LoRA additionally introduces random perturbations and raises more challenges to deal with randomness.
Overall, exploring the generalization properties of Flat-LoRA is indeed interesting but quite hard. We would like to leave this for future work. [1] LoRA Training in the NTK Regime has No Spurious Local Minima, ICML'24 [2] Gradient dynamics for low-rank fine-tuning beyond kernels, arxiv --- **Q3:** While minimal compared to SAM, Flat-LoRA does introduce a small additional memory overhead for storing filter norms and potentially a slight increase in computation time for generating and applying perturbations. **A3:** Flat-LoRA indeed introduces extra memory for storing random seeds and extra time for generating random perturbations. However, this additional overhead is minimal compared to the total training cost—for example, it adds only 0.5% extra memory and 2.4% extra training time on LLama fine-tuning, which is negligible.
Summary: This paper proposes Flat-LoRA, a novel approach to improving Low-Rank Adaptation (LoRA) by incorporating a Bayesian expectation loss objective and random weight perturbations to encourage flatter minima in the full parameter space, all while maintaining computational efficiency. Claims And Evidence: The paper’s claims are well-supported by empirical results, demonstrating improved robustness and efficiency across multiple tasks. Methods And Evaluation Criteria: The authors conducted experiments on datasets spanning multiple domains, including Natural Language Understanding, Image Classification, Large Language Model, and Text-to-Image Generation. These benchmarks are well-matched to the research scope, making them appropriate for evaluating generalization. Theoretical Claims: n/a – The paper does not include formal theoretical proofs beyond conceptual justifications. Experimental Designs Or Analyses: I have reviewed all the experiments. The experimental design is well-structured and covers various domains and tasks. However, an additional ablation study on perturbation strength ($\sigma$) in larger models would help assess the generalizability of the approach. Supplementary Material: I have reviewed all supplementary materials. Relation To Broader Scientific Literature: The paper provides a simple yet effective method for improving LoRA training by leveraging perturbation-based optimization to encourage flatter minima. Essential References Not Discussed: The paper adequately discusses relevant related works. Other Strengths And Weaknesses: ##### Strengths: - The paper is well-structured and clearly written, making it easy to follow. - Flat-LoRA provides a lightweight modification to LoRA training, leveraging random weight perturbations to encourage flatter minima. - The experimental analysis is comprehensive, covering various benchmarks and ablation studies, demonstrating its robustness and broad applicability. 
##### Weaknesses: - The method can be viewed as a computationally efficient alternative to SAM rather than an entirely novel approach. Specifically, it replaces min-max optimization with Bayesian expected loss, making the core idea an approximate acceleration of SAM. Thus, the performance improvements are incremental rather than groundbreaking. - The method is sensitive to the variance magnitude ($\sigma$). Table C3 shows that when $\sigma=0.2$, Flat-LoRA performs poorly on CIFAR datasets, suggesting a strong dependence on precise hyperparameter tuning. Additionally, even with optimal hyperparameters, the overall gains remain limited. Other Comments Or Suggestions: See the Questions for Authors section. Questions For Authors: Q1. The paper states: "Applying SAM directly to A, B shows no significant improvement over vanilla LoRA. In contrast, Flat-LoRA achieves comparable or superior performance to SAM." However, Flat-LoRA appears to be a simplified version of SAM, as it simplifies min-max optimization into Bayesian expected loss minimization. From a theoretical perspective, SAM should achieve superior performance given the same computational budget. Can you clarify why Flat-LoRA outperforms LoRA+SAM and whether this is due to better stability, improved optimization, or simply reduced computational overhead? It would be beneficial to add experiments comparing SAM and its variants with hyperparameter tuning to ensure fair comparisons. **I consider this the most critical issue—if well-addressed, I would be inclined to support the acceptance of this paper.** Q2. Could you provide ablation studies on the impact of variance magnitude ($\sigma$) in large language models (Section 4.3)? Since $\sigma$ plays a crucial role in the method’s effectiveness, further analysis on scalability and adaptability in larger models would better demonstrate Flat-LoRA’s robustness. Code Of Conduct: Affirmed. Overall Recommendation: 3
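The $\sigma$-sensitivity weakness and Q2 above turn on how the noise scale interacts with model size. A hedged sketch of a magnitude-aware, width-scaled perturbation generator of the kind the reviews describe (per-filter norms set the scale, and a $1/\sqrt{\text{fan\_in}}$ factor keeps the effect roughly independent of input width); the exact formula here is an assumption, not taken from the paper:

```python
import torch

def generate_perturbation(W, sigma=0.05):
    # Hypothetical sketch: noise scale follows each output filter's norm,
    # divided by sqrt(fan_in) so a sigma tuned on a narrow model can
    # transfer to a wider one (in the spirit of Proposition 3.2).
    fan_in = W.shape[1]
    row_norms = W.norm(dim=1, keepdim=True)     # one norm per output filter
    return (sigma / fan_in ** 0.5) * row_norms * torch.randn_like(W)
```

Under this scaling, the typical size of the perturbed pre-activation stays comparable as `fan_in` grows, which is one plausible reading of why a single $\sigma$ range could work across model widths.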
Rebuttal 1: Rebuttal: Thanks for your detailed and constructive comments. We address your concerns point by point as follows: --- **Q1:** The method can be viewed as a computationally efficient alternative to SAM rather than an entirely novel approach. **A1:** SAM has been proven to be an effective training strategy for improving generalization capability. However, its extension to large models has not been widely used due to the doubled computational cost and additional memory overhead, especially in parameter-efficient finetuning scenarios (e.g., LoRA). Unlike LoRA-SAM, which naively applies SAM to LoRA, our Flat-LoRA provides a simple and easy-to-implement solution that halves the computational cost and searches for a broader flat solution. As a result, we enable, for the first time, practical sharpness-aware optimization in the broader full parameter space for the LLM fine-tuning task with little extra memory and computational cost. --- **Q2:** The method is sensitive to the variance magnitude ($\sigma$). The overall gains remain limited. **A2:** We fully understand your concern. But we would like to point out that the perturbations given by $\sigma=0.2$ are generally too large. Instead, we suggest $\sigma=0.05/0.10$. Our paper includes different tasks and networks (e.g., ViT, T5, Llama, SDXL) and shows that $\sigma=0.05/0.10$ leads to consistent generalization improvements. To further address this issue, especially in relation to different sizes of neural networks, we propose an improved perturbation generation scheme that employs a scaling factor to make $\sigma$ **independent of the network width** (Proposition 3.2). To validate this, we conduct experiments on the larger ViT-L/14 model. The results demonstrate that the optimal $\sigma$ can be transferred from ViT-B/32 to ViT-L/14, and in all scenarios, the performance of Flat-LoRA surpasses that of LoRA. We will include these in the revision.
CIFAR-100 | LoRA | $\sigma=0.01$ | $\sigma=0.05$ | $\sigma=0.10$ | $\sigma=0.15$ | $\sigma=0.20$
---|---|---|---|---|---|---
ViT-B/32 | 87.74±0.13 | 88.14±0.22 | 88.37±0.41 | 88.65±0.35 | 88.64±0.23 | 88.06±0.31
ViT-L/14 | 92.13±0.17 | 92.33±0.07 | 92.63±0.11 | 93.11±0.13 | 92.98±0.21 | 92.46±0.03

The limited improvements may be due to our use of a stronger training protocol compared to related works (e.g., LoRA-GA [1]), leaving less room for enhancement. Moreover, Flat-LoRA is a practical plug-in method with minimal extra cost, and its improvements are more significant on larger LLMs. [1] LoRA-GA: Low-Rank Adaptation with Gradient Approximation, NeurIPS'24 --- **Q3:** Clarify why Flat-LoRA outperforms LoRA+SAM. Add experiments comparing SAM and its variants with hyperparameter tuning to ensure fair comparisons. **A3:** Thanks for your insightful question. The advantages of Flat-LoRA over LoRA+SAM are twofold: (1) SAM and its variants require twice the training time, which is impractical for large-scale models. Flat-LoRA addresses this limitation, making it more practical for real-world applications. (2) LoRA-SAM pursues flatness on the LoRA parameters, which only optimizes sharpness in a subspace. Flat-LoRA applies perturbations to the full parameters and is thus capable of optimizing the sharpness of a broader space. Specifically for the accuracy improvement, we think the direct reason is (2). To better clarify the above discussions, we followed your suggestion and conducted a more detailed search for the perturbation radius $\rho$ across $\\{0.001, 0.003, 0.005, 0.01, 0.05, 0.10, 0.20, 0.50, 1, 1.5, 2\\}$ for SAM and its variants. The results can be seen in the following table.
T5-base | Flat space | MRPC | CoLA
---|---|---|---
LoRA | - | 88.56±0.37 | 82.87±0.59
LoRA+SAM ($\rho=0.003$) | A,B | 88.98±0.22 | 83.31±0.48
LoRA+GSAM ($\rho=0.003$) | A,B | 89.03±0.36 | 83.11±0.17
LoRA+ASAM ($\rho=0.05$) | A,B | 89.12±0.44 | 83.23±0.44
Flat-LoRA ($\sigma=0.05$) | W | 89.59±0.37 | 83.61±0.38

For LoRA+SAM, the best performance is obtained by setting $\rho=0.003$, which is significantly smaller than its typical value used for training full parameters (usually $\rho=0.1$). Such a small perturbation makes the difference between LoRA and LoRA-SAM not significant. Frankly, we did not expect the suitable value for $\rho$ to be so small, so $\rho=0.003$ was outside the range of our previous hyperparameter tuning. We will update the results of the LoRA-SAM variants. **Q4:** Provide ablation studies on the impact of variance magnitude ($\sigma$) in large language models. **A4:** We test different $\sigma$ on Llama-2-7b/13b as below. We observe that $\sigma=0.05/0.10$ is a good choice for both 7b and 13b models. We will add this in the revision.

GSM8k | LoRA | $\sigma=0.01$ | $\sigma=0.05$ | $\sigma=0.10$ | $\sigma=0.15$ | $\sigma=0.20$
---|---|---|---|---|---|---
Llama-2-7b | 57.47±0.45 | 58.35±0.42 | 60.65±0.63 | 60.56±0.48 | 60.08±0.76 | 58.50±0.85
Llama-2-13b | 66.76±0.23 | 67.02±0.67 | 67.75±0.70 | 68.11±0.53 | 67.66±0.97 | 67.34±1.17

--- Rebuttal Comment 1.1: Comment: Thank you for the authors' detailed and thoughtful response! The experimental analysis is comprehensive, and the proposed method demonstrates consistent performance improvements. I also appreciate the analysis regarding the differences between LoRA+SAM and Flat-LoRA, which helped clarify the distinction. Compared to directly fine-tuning with SAM, the proposed approach achieves better performance with reduced training time, a practical advantage, although I personally do not find "twice the training time" to be prohibitive for fine-tuning large-scale models. I still have some concerns regarding the novelty of the method.
The core idea—adding perturbations to LoRA weights during optimization to encourage flatter minima—while effective, feels relatively incremental. Moreover, although the performance improvements are consistent, they remain modest. Taking these points into account, I have decided to maintain my Weak Accept score. --- Reply to Comment 1.1.1: Comment: Thanks for your valuable feedback! We are glad that our response has helped address your concerns regarding the distinction between LoRA+SAM and Flat-LoRA. As you mentioned, Flat-LoRA is a simple yet effective approach, and we think it effectively addresses the bottlenecks associated with applying SAM to fine-tuning large models (e.g., time, memory, and computation). Furthermore, it highlights the differences between flatness in the "low-rank" space versus the actual flatness observed when merged with the frozen pre-trained weights. We are glad to see that, compared to LoRA+SAM, Flat-LoRA achieves efficiency without compromising accuracy and even delivers better performance. We hope this work can serve as a new paradigm for fine-tuning large models due to its simplicity and effectiveness.
Summary: The paper proposes “Flat-LoRA,” a parameter-efficient fine-tuning method that adds random weight perturbations (with an intelligent weight dependent scaling), in order to achieve flatter minima and improve generalization. Experimental results on both vision (CLIP and Stable Diffusion) and language (T5, Llama-2) tasks consistently show that Flat-LoRA outperforms baseline LoRA variants, with particular gains in low-data and out-of-domain scenarios. Claims And Evidence: Two main claims are supported: * Flat-LoRA yields flatter solutions in the full parameter space rather than just in the low-rank subspace: The authors visualize the loss landscapes (Figure 6) and demonstrate flatter minima compared to standard LoRA. * Flat-LoRA significantly improves performance across multiple tasks (e.g., GLUE, text-to-image, code generation) and robustness: Comprehensive experimental results show gains over standard LoRA and other PEFT improvements. One is not: * Flat-LoRA generalizes better: While the flatter loss landscape and improved performance can be indicative of better generalization, the paper does not explicitly show train-vs-test loss curves or a generalization gap. Hence, it is unclear whether the gains are from genuinely improved generalization rather than lower training loss or more favorable optimization dynamics. Methods And Evaluation Criteria: The paper evaluates performance on standard benchmarks (GLUE, text/image tasks, code generation, etc.), making comparisons against strong LoRA baselines and other variants. This breadth of experiments aligns well with the parameter-efficient fine-tuning literature. Theoretical Claims: The authors argue that smoothing the loss in the full weight space leads to broader basins and better generalization. While the Bayesian expectation loss formulation is standard, the paper’s main theoretical link is that random perturbation akin to filter-wise noise effectively regularizes the final solution. 
Experimental Designs Or Analyses: Experiments are structured clearly, with appropriate baselines and repeated trials. The paper carefully examines both standard benchmarks and out-of-domain shifts (e.g., CIFAR-100-C corruption, instruction-following tasks). One limitation: results compare final performance but do not show training curves or highlight differences in train versus test loss. Explicit train–test curves or a direct measure of the generalization gap would strengthen the evidence that improved performance primarily stems from better generalization. Supplementary Material: It contains a few additional results. Relation To Broader Scientific Literature: Flat-LoRA extends the line of work on PEFT techniques by bringing insights from random-perturbation-based regularization and sharpness-aware minimization to improve generalization. There is not much that is particular to LoRA or PEFT beyond the observation that it can also be prone to sharp minima. Essential References Not Discussed: No crucial references appear missing. Other Strengths And Weaknesses: Strengths: The paper raises an interesting distinction that can motivate future work: flatness in the "low-rank" space versus actual flatness once merged with the frozen pre-trained weights. It demonstrates strong empirical results on diverse tasks, with small memory/time overhead. Weaknesses: The paper does not explicitly demonstrate that performance gains arise from better train–test generalization. The limitations of the theoretical treatment in Section 3.2 could be more clearly stated. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
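The "flatness once merged" distinction the review highlights rests on the standard LoRA merge at inference: the low-rank update is folded into the frozen weights, so the loss landscape that matters is the one around the merged matrix, not around the factors alone. A minimal sketch (the `alpha / r` scaling is the common LoRA convention and is assumed here, not quoted from the paper):

```python
import torch

def merge_lora(W0, A, B, alpha=16, r=None):
    # At inference, the LoRA adaptation is merged into the frozen weights:
    # W = W0 + (alpha / r) * B @ A. Flatness of the loss around this merged W
    # is what the review contrasts with flatness in (A, B) alone.
    r = r if r is not None else A.shape[0]
    return W0 + (alpha / r) * (B @ A)
```

A flat minimum with respect to `(A, B)` gives no direct guarantee about curvature of the loss as a function of the merged `W`, which is the gap Flat-LoRA is described as targeting.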
Rebuttal 1: Rebuttal: Thanks for your detailed and constructive comments. We address your concerns point by point as follows: --- **Q1:** The paper does not explicitly show train-vs-test loss curves or a generalization gap. Hence, it is unclear whether the gains are from genuinely improved generalization rather than lower training loss or more favorable optimization dynamics. **A1:** We plot the train-vs-test loss curves and generalization gap on CIFAR-100 and MRPC datasets in [Figure 1](https://anonymous.4open.science/r/Flat-LoRA/rebuttal.pdf) and present the final training-vs-test loss and generalization gap below. The results show Flat-LoRA exhibits slightly higher training loss than LoRA, with a smaller generalization gap between training and test accuracies. Thus, we conclude that the gains of Flat-LoRA are not due to lower training loss but due to better optimization that confers better generalization. CIFAR-100 | Final train loss | Final test loss| Generalization gap (%) ---|---|---|--- LoRA|0.02±0.01|0.48±0.01|11.65±0.03 Flat-LoRA|0.05±0.02|0.46±0.00|9.92±0.08 MRPC | Final train loss | Final test loss| Generalization gap (%) ---|---|---|--- LoRA|0.03±0.03 | 0.25±0.00 | 10.96±0.18 Flat-LoRA|0.04±0.03|0.20±0.01 | 9.50±0.22 --- **Q2:** The limitations of the theoretical treatment in Section 3.2 could be more clearly stated. **A2:** Thank you for this valuable feedback. We propose to make the following changes in the revision to more clearly state the limitation of LoRA-SAM: 1. We will give a clearer and more rigorous derivation on the actual perturbation of LoRA-SAM in the full parameter space. 2. We will include an experiment to validate the approximation $\varepsilon_W \approx \varepsilon_B A=c(\nabla_W L)A^\top A$ (Eqn. (4)) by showing $\frac{\|\varepsilon_BA\|}{\|\varepsilon_W\|}>0.95$ throughout the training. The demonstration experiment can be found at [Figure 2](https://anonymous.4open.science/r/Flat-LoRA/rebuttal.pdf).
CellFlux: Simulating Cellular Morphology Changes via Flow Matching
Accept (poster)
Summary: This paper proposes CellFlow, a generative model for cell microscopy images in the presence of chemical and/or genetic perturbations. In contrast to existing methods that tackle this problem, CellFlow can explicitly take into account batch effects by learning a distribution-level mapping between unperturbed (control) images and perturbed images within the same experimental batch. This is achieved by using flow matching as a tool for unsupervised image translation. The experimental results demonstrate the importance of taking batch effects into account, and show improved image quality and generalization across the different perturbations in the data. ## update after rebuttal Recommendation increased during rebuttal phase. Claims And Evidence: The claims made are as follows: 1. CellFlow models distribution-wise mappings from unperturbed to perturbed images and consequently distinguishes perturbation effects from batch effects. 2. CellFlow features improved image quality on three different genetic and/or chemical perturbation datasets over two baseline models. 3. CellFlow generalizes to perturbations not seen during training. 4. CellFlow enables continuous interpolation between unperturbed and perturbed cellular states, offering a means to study the perturbation dynamics. Overall, all claims are supported by experimental results. However, see also the comments in the experimental design or analyses section of the review. Methods And Evaluation Criteria: The datasets and methods used appear suitable for the problem statement of this paper. In particular, flow matching's capability to map arbitrary source distributions to the target distribution aligns well with the paper's goal of generating perturbed cellular morphologies, starting from unperturbed cells of the same experimental batch. Theoretical Claims: I read the proof of proposition 1 in Appendix A and did not notice any issues, but I did not thoroughly check its correctness.
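The source-to-target mapping ability of flow matching noted above rests on a simple regression objective; the following sketch (reviewer's illustration with synthetic 2-D Gaussians standing in for control/perturbed image distributions, and a crude constant velocity field standing in for the neural network) shows the training-pair construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Source ("control") and target ("perturbed") samples, paired arbitrarily.
x0 = rng.normal(loc=0.0, scale=1.0, size=(4096, 2))
x1 = rng.normal(loc=3.0, scale=1.0, size=(4096, 2))

# Linear interpolation path between paired samples: x_t = (1 - t) x0 + t x1.
t = rng.uniform(size=(4096, 1))
x_t = (1 - t) * x0 + t * x1

# The regression target for the velocity field is x1 - x0; a network
# v_theta(x_t, t) would be trained on (x_t, t) -> (x1 - x0). Here we fit
# the crudest possible model: a single constant velocity.
v_hat = (x1 - x0).mean(axis=0)

# Integrating the constant field from t=0 to t=1 transports the source
# mean onto the target mean.
transported_mean = x0.mean(axis=0) + v_hat
```

The key property for this paper is that `x0` need not be noise: any source distribution (e.g., same-batch control cells) can be flowed onto the target.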
Experimental Designs Or Analyses: 1. I assume FID and KID scores are calculated using a model trained on ImageNet. Can we be sure that those scores are meaningful in the context of cell images, as such images are not aligned with images of natural scenes on which the model was trained? 2. It is claimed that CellFlow provides a potential means to study perturbation trajectories. However, from 4.4 it remains unclear whether or not these trajectories correspond to something that is biologically meaningful. For example, in the top row of Fig. 4a, it seems that the model has learned to slowly dim the pixel values that are relatively far from the nucleus, which makes sense since this would roughly correspond to the shortest path between the unperturbed and perturbed images in pixel space. However, this is not necessarily a biologically plausible path. 3. In addition to ML-enabled metrics like FID score and MoA, it could be beneficial to also add biologically relevant metrics (e.g. nucleus size) that describe the morphology of the cell, and see whether these match between the generated and true perturbed cell images. Supplementary Material: I did not review the supplemental material in-depth. I read the proof in Appendix A but did not thoroughly verify its correctness. Relation To Broader Scientific Literature: The paper uses the image-to-image translation capabilities of flow matching (earlier explored in e.g. [1] and [2]) to solve the problem of mapping the distribution of unperturbed cell images to the distribution of perturbed cell images. This approach is shown to improve over baseline models that attempt to model the effect of perturbations on cell morphology. 
So, although both the method and problem are not novel by themselves, to my knowledge this is (1) the first application of flow matching to the problem, and (2) the technology enables taking batch effects into account, which prior methods for the application could not, and the results demonstrate the benefits of doing so. [1] Improving and generalizing flow-based generative models with minibatch optimal transport. https://arxiv.org/abs/2302.00482 [2] Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow. https://arxiv.org/abs/2209.03003 Essential References Not Discussed: Classifier-Free guidance for Flow Matching has been introduced in [3], and it would be good to add the citation in Section 3.3. [3] Guided Flows for Generative Modeling and Decision Making. https://arxiv.org/abs/2311.13443 Other Strengths And Weaknesses: Strengths: 1. The paper addresses a relevant application and is well-written. 2. See the points under 'methods and evaluation criteria' Weaknesses: 1. See the comments under experimental design and analyses for potential weaknesses. Other Comments Or Suggestions: N/A Questions For Authors: Please respond to the comments under experimental design or analyses in particular. Code Of Conduct: Affirmed. Overall Recommendation: 4
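For reference on the FID question raised in this review: the Fréchet distance has a closed form between Gaussians fitted to feature vectors. A small NumPy sketch (synthetic features in place of Inception or cell-specific embeddings; all names hypothetical) also makes it easy to probe sample-size sensitivity by recomputing at different n:

```python
import numpy as np

def _sqrtm_psd(A):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

def fid(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two feature sets:
    ||mu_a - mu_b||^2 + Tr(Sa + Sb - 2 (Sa Sb)^{1/2})."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    Sa = np.cov(feats_a, rowvar=False)
    Sb = np.cov(feats_b, rowvar=False)
    # Tr((Sa Sb)^{1/2}) via the symmetric product Sa^{1/2} Sb Sa^{1/2},
    # which shares its eigenvalues with Sa Sb.
    Sa_half = _sqrtm_psd(Sa)
    covmean_tr = np.trace(_sqrtm_psd(Sa_half @ Sb @ Sa_half))
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(Sa + Sb) - 2 * covmean_tr)

rng = np.random.default_rng(0)
a = rng.standard_normal((5000, 8))          # "real" features
b = rng.standard_normal((5000, 8)) + 0.5    # shifted "generated" features
```

Because the covariance estimates enter the score directly, their estimation noise at small n is exactly what drives the high variance of FID discussed in the reviews.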
Rebuttal 1: Rebuttal: We thank Reviewer L3Zf for their thoughtful comments and for recognizing the paper’s well-supported claims, appropriate methodology and evaluation, sound theoretical foundation, strong experimental results, and relevance to the cell biology domain. We address their comments below: --- > **Metric validity**: *Are FID/KID meaningful for cell images?* We agree that FID/KID were originally designed for natural images, but they remain **standard and valid metrics** in cell morphology prediction. They are used by all six existing baselines (e.g., IMPA, PhenDiff; see Table 5 in paper's Appendix). Our qualitative observations during development also confirmed that FID/KID effectively reflects image quality and guided model improvement. Furthermore, to ensure biological relevance, we **supplement FID/KID with mode-of-action (MoA) prediction**, which directly evaluates whether generated images preserve meaningful biological signals. As suggested by Reviewer Yg4R, we now include **MoA Accuracy, MoA Macro-F1, and MoA Weighted-F1** across in-distribution (Table 2a in paper), out-of-distribution (Table 2b in paper), and batch effect correction settings (Table 2d in paper). All these metrics demonstrate that CellFlow consistently and significantly outperforms all baselines. 
|Table 2a (in-distribution evaluation)|FID/KID|MoA Acc|MoA Macro-F1|MoA Weighted-F1| |---|---|---|---|---| |Groundtruth Image|Reported in paper|72.4|69.7|72.1| |PhenDiff|…|52.6|33.6|52.1| |IMPA|…|63.7|40.2|64.8| |**CellFlow**|**…**|**71.2**|**49.0**|**70.7**| |Table 2b (out-of-distribution evaluation)|FID/KID|MoA Acc|MoA Macro-F1|MoA Weighted-F1| |---|---|---|---|---| |Groundtruth Image|Reported in paper|88.0|85.0|88.0| |PhenDiff|…|9.6|9.3|7.4| |IMPA|…|16.0|10.0|13.1| |**CellFlow**|**…**|**43.2**|**36.6**|**42.8**| |Table 2d (batch effect correction)|FID/KID|MoA Acc|MoA Macro-F1|MoA Weighted-F1| |---|---|---|---|---| |CellFlow w/ Other Batch Init|Reported in paper|48.2|32.9|48.4| |**CellFlow**|**…**|**71.2**|**49.0**|**70.7**| This improvement stems from our **core contribution**—**new problem formulation and solution method.** We propose modeling this task as a **distribution-to-distribution generation problem**, rather than the standard noise-to-distribution or single-to-single image prediction. This is a **conceptual shift** that aligns better with how perturbations affect heterogeneous cell populations, and allows for novel capabilities like batch effect correction and perturbation interpolation. Flow matching provides a principled and efficient tool to solve this reframed problem. While biological impact is not directly tested in this paper, CellFlow’s **strong performance and new capabilities** open the door to applications in **drug response prediction**, drug discovery, and personalized medicine in future biological studies. --- > **Interpolation plausibility**: *Do interpolations correspond to biologically meaningful transitions?* Thank you for your question. We emphasize that interpolation is a **novel and unique capability** of CellFlow, not supported by existing computational tools. Verifying intermediate cell state transition is inherently difficult, as current biotechnologies do not capture large-scale video-like morphological changes. 
Despite this limitation, domain experts have highlighted the potential verification of this feature—for example, in modeling dose-response curves (e.g., predicting medium-dose effects by interpolating high/low doses) or time-course dynamics (e.g., estimating 36h outcomes by interpolating 24h and 48h measurements). While our paper focuses on the **methodological foundation**, exploring these applications through biological validation is a promising direction for future work. --- > **Biological metrics**: *Could features like nucleus size be evaluated?* Thank you for the suggestion. We extracted **CellProfiler features related to nuclear size** under three perturbations known to enlarge nuclei (taxol, vincristine, and demecolcine) using the BBBC021 dataset. As shown in the table below (mean and 95% confidence interval reported), CellFlow most closely matches the real perturbed morphology in terms of nuclear size. ||**taxol**|**vincristine**|**demecolcine**| |---|---|---|---| |**Control** |1612.0 ± 39.5|1612.0 ± 39.5|1612.0 ± 39.5| |**Target**|2296.7 ± 190.1|2365.5 ± 125.5|2311.0 ± 136.1| |**PhenDiff**|1755.9 ± 138.2|1947.8 ± 70.8|2118.8 ± 102.7| |**IMPA**|2088.3 ± 190.8|2116.9 ± 107.1|2386.5 ± 123.9| |**CellFlow**|**2141.0 ± 166.6**|**2276.4 ± 115.6**|**2323.8 ± 121.9**| --- > **Reference addition**: *Please consider citing “Guided Flows for Generative Modeling and Decision Making”, which introduces classifier-free guidance in flow matching* Thank you for the suggestion. We will include this reference in the final version. --- Thank you again for your detailed feedback! We will include all of them in the revised paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for their clear rebuttal. Please find my response below: **Metric validity:** Your argument makes sense, especially given that the metrics correlate well with MoA-based metrics as well as the biological metric (nucleus size) provided in the rebuttal. 
**Interpolation plausibility:** I am not convinced about the biological meaningfulness of the generated interpolations. By construction, I would expect that a flow-matching based method would roughly learn the shortest path in pixel space, and not necessarily a biologically meaningful path. This seems also to be the case in Figure 4, where pixels further from the nucleus are slowly dimmed, instead of e.g. the cell shrinking by retracting the membrane (but keeping well-lit pixels within the membrane). Now, I am not a cell biologist, so I cannot provide expert comments on this particular example and might be wrong for this case. Still, it seems that the claim that the method has the potential to generate interpolations that correspond to dynamic, biologically meaningful perturbation responses is unlikely in general, or at least not supported by evidence in the paper. **Biological metrics:** Thank you for providing these results, they are convincing and in line with the expectation from the qualitative results in the paper. **Concluding remarks of response:** On the one hand, the paper is a well-executed application paper, focusing on the task of predicting cell morphology changes induced by perturbations. Further, it effectively leverages the capabilities of flow matching to enable new use-cases (controlling for batch effects). On the other hand, I am not convinced about the claim that the generated interpolations can represent morphological change trajectories that are biologically meaningful, and reviewer Z8BU has expressed a similar concern in their review. Can the authors at least include (in their paper and final response) a clear overview of the limitations of this approach, and state in which conditions these interpolations follow a biologically meaningful manifold instead of "simply" interpolating in pixel-space? --- Reply to Comment 1.1.1: Comment: Dear Reviewer L3Zf, We’re glad our updated results have addressed your concerns. 
Your suggestions have improved the quality of our work, and we will ensure all of them are included in the revised paper. Below, we provide a detailed response to the concern about **biological plausibility of interpolated cell states.** --- **1. Limitations of Interpolation** We **fully acknowledge** that verifying whether interpolated cellular morphologies reflect biologically meaningful transitions—rather than simple pixel-space blending—is an open question. **In the submitted paper, we have already softened the language, describing interpolation as a potential capability rather than a validated outcome:** > Abstract: …CellFlow enables continuous interpolation between cellular states, providing **a potential tool** for studying perturbation dynamics… **In the final version, we will further add this sentence explicitly to the limitations section:** > Limitations: …While our method enables interpolation between cell states, we acknowledge that the biological validity of these interpolations remains unverified; establishing their plausibility will require future work involving ground-truth data and experimental validation… --- **2. Our Core Contribution** That said, we would like to re-emphasize the core contribution of this work. **Our key contribution is not the interpolation itself, but a new problem formulation: modeling cell morphology changes as distribution-to-distribution generation**. This better captures population-level effects of perturbations, and flow matching offers a principled solution for this setting. **Interpolation is an emergent capability, not the central claim of the paper.** --- **3. Opportunities for Biological Validation** We agree with the reviewer that validating the biological relevance of interpolated states is a key next step. While **existing datasets lack ground truth for intermediate states**, future work could explore: - **Dose interpolation**: Some datasets include images under multiple dosages. 
We can test whether **an interpolation from control to high dose passes through realistic medium-dose morphologies**. - **Timepoint interpolation**: For datasets with multiple timepoints (e.g., day 0, day 5, day 10), we can evaluate whether **interpolated images from control to day 10 recover morphology consistent with day 5**. - **Drug perturbation interpolation**: Current datasets rarely include fine-grained trajectories post-treatment. **Validating such interpolations could collect new wet-lab data, such as live-cell imaging.** --- **4. Toward More Plausible Interpolations** We appreciate your point that flow matching may yield “shortest path in pixel space” trajectories. To address this in future work, we plan to explore: - **Interpolating in latent space**: We can interpolate in latent space (e.g., via an autoencoder) rather than pixel space. This may help ensure trajectories follow a more structured biological manifold. - **Adding supervision from intermediate states**: In datasets with known intermediate points (e.g., medium dose), we can train the model to explicitly pass through those states during interpolation. - **Adding constraints**: Interpolation could be guided with additional constraints. For example, adding a GAN loss can encourage interpolated images to look like real cell images. --- **5. Context from Related Work** Latent interpolation is common in generative modeling and has been explored in biological settings using PCA or VAE-based models [1,2]. However, **few of these have been experimentally validated**, largely due to the lack of dynamic cell imaging data. Verifying such interpolations requires costly wet-lab experiments and the community’s joint effort to develop appropriate datasets and protocols. [1] Integrated intracellular organization and its variations in human iPS cells (Nature 2023) [2] Orientation-invariant autoencoders learn robust representations for shape profiling of cells and organelles (Nature Comm 2024) --- **6. 
Summary** In summary, we agree that the biological plausibility of interpolated trajectories is an important and open question—but one that is beyond the scope of this method-centric paper. **We hope the reviewer can recognize that CellFlow makes a substantive methodological contribution through its novel problem formulation and strong empirical performance. The interpolation capability, while not the focus, emerges naturally from our framework and is, to our knowledge, the first demonstration of such a capability in cell perturbation modeling. Biological validation of interpolations is better suited for a follow-up, biology-focused paper involving wet-lab experiments.** We will add a dedicated discussion in the appendix outlining future directions for biological validation. --- --- **Update** Thank you again for your time and constructive feedback, as well as for improving the evaluation of our work!
Summary: This work uses a flow-based conditional generative model applied to cellular imaging, with the goal of synthetically generating, given a reference control cell and a perturbation (either chemical, genetic, or both), a novel cell image illustrating the effects of the perturbation. Cellular morphology prediction is cast as a flow problem, transforming an initial data distribution into a final data distribution. While this task is achieved with off-the-shelf conditional flow matching models, the authors bring up a fundamental problem, related to the destructive nature of cell imaging: training data from the initial and final distributions are *not* coupled. Lack of coupling is side-stepped by observing that sampling initial cells from the same batch is sufficient to ensure proper training. Moreover, the authors note that the proposed approach allows distinguishing true biological responses to perturbation from batch-specific artifacts. Results revolve around experiments on three datasets related to various perturbations (chemical, genetic, and both), and the proposed method is compared against two alternatives from the literature. Performance metrics are 1) image quality, 2) classification accuracy based on the mode-of-action multi-class classification problem. The proposed method improves consistently over the state of the art on both metrics. Additional ablation studies confirm the design choices made by the authors. ## POST REBUTTAL Thanks for your detailed answers to my questions, and for the new results, which in my opinion strengthen your work. I have modified my score accordingly. Claims And Evidence: * Claim 1: CellFlow generates high-fidelity images. This claim is partially supported by the experiments. The use of custom-made FID and KID scores is appropriate, but I am afraid that the number of generated samples (5k) is too small. Indeed, the FID score is known to have high variance when calculated on smaller sample sizes.
Using only 5k images might increase the likelihood of statistical noise affecting the scores. I am afraid that the differences between methods might diminish with larger sample sizes, suggesting that the superiority of CellFlow over baselines might be less pronounced if evaluated on 50k images. * Claim 2: The generated images capture meaningful biological patterns. This claim is partially supported by the experiments. In practical terms, an image classifier (e.g., a CNN) is trained on real, experimentally observed images of perturbed cells, where the ground-truth labels are the known MoAs of the drugs applied, basically consisting of roughly 30 classes. The MoA classification accuracy is the percentage of synthetically generated images (by the CellFlow model) for which the classifier correctly predicts the MoA, compared to the known ground-truth MoA for that perturbation. However, class distribution is not properly discussed in the paper, and the problem is that it can be highly unbalanced, with certain classes being over-represented. Given this imbalance, accuracy as a metric could be misleading because it might be biased towards the majority classes. Alternative metrics such as macro-averaged and weighted F1-scores could be more appropriate. * Claim 3: CellFlow generalizes to out-of-distribution perturbations never seen during training. This claim is partially supported by Table 2.b. However, the table only reports image quality metrics and not the MoA metric, giving only a partial assessment of the generalization capabilities of CellFlow. * Claim 4: CellFlow corrects batch effects by conditioning on control cells from different batches. By comparing control images with generated images, it can disentangle true perturbation-induced morphological changes from experimental batch artifacts.
This claim is marginally supported in the paper. * Claim 5: CellFlow enables bidirectional interpolation between cellular states due to the continuous and reversible nature of the velocity field in flow matching. This claim is true by construction. However, its implications are not discussed in the paper. Methods And Evaluation Criteria: * Datasets: in my opinion the proposed datasets used in the experimental protocol are appropriate. * Alternative methods: authors focus only on two baselines that also take control images into account. It would have been interesting to check the performance of other methods that do not do that, which would have strengthened the results and claims. * Metrics: while I consider image quality metrics to be appropriate (modulo the fact that using 5k samples might not be sufficient), MoA accuracy might not be, and F1-score metrics would be preferable. Theoretical Claims: The proof for Proposition 1 appears correct to me. Experimental Designs Or Analyses: Yes, I checked, and my main comment is as follows. Section 2.2 goes to great lengths in discussing the coupling problem that exists in the data. The proposed solution (in section 2.3) is valid, and simply translates into building appropriate training sets where initial and final distributions come from the same batch. My biggest concern revolves around the concept of time. In high-content imaging experiments, cells are fixed (chemically preserved) at a specific time point after the perturbation is applied. This process halts all biological activity, effectively “freezing” the cellular state. Typically, in a well-designed experiment, both control and perturbed cells are fixed at the same time to ensure that temporal evolution does not differentially affect the two groups. However, cells are dynamic living systems, and their morphology naturally changes over time due to growth, division, apoptosis, and other metabolic activities, even in control conditions.
Even when control and perturbed cells are fixed at the same time, there can be issues: cells can be at different stages of the cell cycle or physiological state; perturbed cells might respond to the treatment (chemical or genetic) at different rates depending on their initial state, leading to asynchronous responses; and stochasticity in biological processes causes variability in cell morphology over time. These aspects are marginally discussed in the paper: they are fundamental, key challenges, and I wonder what the authors' position is on these issues. Supplementary Material: Yes, all of them. I started by checking the simple proof of proposition 1. Then I carefully read appendix D. Relation To Broader Scientific Literature: This paper weighs more toward the biology/cell community than the machine learning community: indeed, the methodological part of the paper is based on a well-established method that appeared several years ago, and there are no major contributions apart from the discussion in section 2.2 and section 2.3. From the biology/cell literature point of view, I think this paper might represent an interesting advancement, but the experimental protocol and results are somehow shallow in this respect: focusing only on MoA accuracy, while important (and possibly improvable), does not give a sense of the real impact of this work in, e.g., a drug design context (as per the introduction in the paper). Essential References Not Discussed: N/A Other Strengths And Weaknesses: This is a beautiful paper, very clearly written, and majestically illustrated!! Other Comments Or Suggestions: Please check the literature! There is another work called CellFlow, published in the MLGenX workshop associated with ICLR 2024.
@inproceedings{ palma2024cellflow, title={cellFlow: a generative flow-based model for single-cell count data}, author={Alessandro Palma and Till Richter and Hanyi Zhang and Andrea Dittadi and Fabian J Theis}, booktitle={ICLR 2024 Workshop on Machine Learning for Genomics Explorations}, year={2024}, url={https://openreview.net/forum?id=xaLXV2j8vl} } Questions For Authors: * Would it be possible for you to report, in Table 2.a, F1-score instead of accuracy? See comments above * Would it be possible for you to report, in Tables 1 and 2, FID and KID scores computed on more than 5k samples? Maybe 50k is too much of a computational burden, but ramping up to 20k could be feasible. * Would you mind providing your views on the considerations above on the role of "time"? Code Of Conduct: Affirmed. Overall Recommendation: 3
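The accuracy-versus-F1 question above can be illustrated with a self-contained sketch (pure Python; the two-class "MoA" labels and the 90/10 split are hypothetical, not from the paper) showing how accuracy hides a poorly predicted minority class while macro-F1 exposes it:

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Per-class F1, plus macro (unweighted) and weighted (by support) averages."""
    classes = sorted(set(y_true))
    support = Counter(y_true)
    per_class = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        per_class[c] = 2 * tp / denom if denom else 0.0
    macro = sum(per_class.values()) / len(classes)
    weighted = sum(per_class[c] * support[c] for c in classes) / len(y_true)
    return per_class, macro, weighted

# Imbalanced toy labels: 90 majority ("dmso"), 10 minority ("taxol").
# The classifier nails the majority class but gets only 2/10 minority right.
y_true = ["dmso"] * 90 + ["taxol"] * 10
y_pred = ["dmso"] * 90 + ["taxol"] * 2 + ["dmso"] * 8

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
per_class, macro, weighted = f1_scores(y_true, y_pred)
```

Here accuracy is 0.92 while macro-F1 is about 0.65, since the minority class reaches an F1 of only 1/3; weighted F1 sits in between, which is why reporting all three (as requested) gives a fuller picture.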
Rebuttal 1: Rebuttal: We thank Reviewer Yg4R for their appreciation of the paper’s beautiful writing and illustration, proper experimental design and evaluation, and strong relevance to the biology and cell imaging community. We address their comments below: --- > **Claim 1 (effect of sample size on FID/KID):** *Could FID/KID score differences diminish with larger sample sizes?* We added evaluations of different sample sizes on BBBC021 (1K–5K, only 6K test images available) and JUMP (10K–20K). As shown, **CellFlow consistently outperforms all baselines across sample sizes**, with 30–45% relative improvement, demonstrating robustness. ||1K FID|2.5K FID|5K FID|10K FID|20K FID|1K KID|2.5K KID|5K KID|10K KID|20K KID| |-|-|-|-|-|-|-|-|-|-|-| |PhenDiff|71.3|64.3|49.5|47.5|46.1|2.55|3.68|3.10|4.95|5.09| |IMPA|52.4|41.4|33.7|14.0|12.9|3.20|3.38|2.60|1.04|1.05| |CellFlow|34.7|25.2|18.7|8.5|7.5|1.67|1.90|1.62|0.63|0.63| --- > **Claim 2 (class imbalance in MoA prediction)**: *Could you report macro and weighted F1 scores in Table 2.a?* We now report **Macro-F1 and Weighted-F1** scores for MoA classification. CellFlow outperforms all baselines not only in accuracy but also in F1 metrics, addressing class imbalance concerns. ||Acc|Macro-F1|Weighted-F1| |-|-|-|-| |Groundtruth Image|72.4|69.7|72.1| |PhenDiff|52.6|33.6|52.1| |IMPA|63.7|40.2|64.8| |CellFlow|71.2|49.0|70.7| --- > **Claim 3 (OOD generalization)**: *Why are MoA metrics not reported for OOD perturbations in Table 2.b?* We now **include MoA metrics** for OOD perturbations. CellFlow significantly outperforms baselines in this setting as well, reinforcing the model’s generalization capability. ||Acc|Macro-F1|Weighted-F1| |-|-|-|-| |Groundtruth Image|88.0|85.0|88.0| |PhenDiff|9.6|9.3|7.4| |IMPA|16.0|10.0|13.1| |CellFlow|43.2|36.6|42.8| --- > **Claim 4 (batch effect)**: *Claim on batch correction is marginally supported.* CellFlow explicitly conditions on control images from the **same batch** to correct batch effect. 
In the table below (an extension of Table 2.d in the paper, with MoA evaluation added), we compare generation quality and MoA classification performance when using **same-batch** versus **other-batch** control initialization. Using controls from the same batch yields significantly better results, highlighting their importance for correcting batch effects and capturing true perturbation signals. ||FID$_o$|FID$_c$|KID$_o$|KID$_c$|MoA Acc|MoA Macro-F1|MoA Weighted-F1| |-|-|-|-|-|-|-|-| |CellFlow w/ Other Batch Init|23.7|71.9|2.08|2.09|48.2|32.9|48.4| |CellFlow w/ Same Batch Init|18.7|56.8|1.62|1.59|71.2|49.0|70.7| --- > **Methods and evaluation (alternative methods)**: *Comparison with more baselines would strengthen the claim.* Cell morphology prediction is a new task with only six baselines (Table 5 in Appendix). We included the **only two published methods** using control images; others are unpublished, lack code, or omit controls. We also added MorphoDiff (ICLR 2025), a diffusion-based method. Under our setup, CellFlow outperforms it in image quality and MoA metrics. ||FID$_o$|KID$_o$|FID$_c$|KID$_c$|MoA Acc|MoA Macro-F1|MoA Weighted-F1| |-|-|-|-|-|-|-|-| |MorphoDiff|65.8|7.99|114.1|7.97|38.3|24.5|34.2| |CellFlow|18.7|1.62|56.8|1.59|71.2|49.0|70.7| --- > **Experimental designs (time effect)**: *How does the method handle issues like fixation and cell cycle variation?* We appreciate this insightful question. Cell morphology naturally varies over time due to asynchronous cell cycles and stochastic responses. CellFlow models **distribution-level transformations** rather than single-cell alignments, enabling it to **average out temporal and biological variability and capture population-level perturbation effects**. We believe this distribution-to-distribution approach offers a principled solution to handling temporal variation in image-based profiling. 
--- > **Claim 5 (interpolation use case)**: *Implications of bidirectional interpolation not discussed.* > **Relation to broader science (real-world impact)**: *What is the impact of this work for drug design or discovery?* We appreciate the reviewer’s comment. Our core contribution is a **new problem formulation**—modeling perturbation effects as a distribution-to-distribution transformation—and a **principled solution** via flow matching. This leads to significantly stronger performance and unlocks several novel capabilities relevant to biology: for example, batch effect correction helps isolate true biological signals, and perturbation interpolation enables exploration of intermediate drug doses or timepoints. While full biological validation is beyond the scope of this paper, these capabilities lay the groundwork for future impact in drug design & discovery. --- > **Other comments (naming)** We will **rename our method to CellFlux** to avoid confusion. Thank you for bringing this to our attention. --- Thank you again for your detailed feedback! We will include all of them in the revised paper. --- Rebuttal Comment 1.1: Comment: Dear authors, thank you for the detailed responses to my questions and observations. Overall, the new results better support the claims in the paper, and for this reason I will raise my score. Good job! --- Reply to Comment 1.1.1: Comment: Dear Reviewer Yg4R, We are glad that our updated results and clarifications have addressed your concerns. Your suggestions have improved the quality of our work, and we will ensure that all of them are included in the revised paper. Thank you again for your time and constructive feedback!
Summary: This work introduces CellFlow, an image-generative model designed to simulate cellular morphology changes induced by chemical and genetic perturbations. It leverages flow matching, a generative modeling technique, to learn a distribution-to-distribution map that transforms unperturbed cell states into perturbed ones. CellFlow is evaluated on chemical, genetic, and combined perturbation datasets, showing significant improvements in FID scores and mode-of-action prediction accuracy compared to existing methods. Due to the nature of flow matching, the model allows for continuous interpolation between cellular states, offering a tool for studying perturbation dynamics and also the recovery dynamics if the transformation is considered in reverse time. ## update after rebuttal Thank you for the additional experiments and clarifications in response to my feedback. The updated results addressed many of my concerns, and I appreciate the authors' efforts to improve the quality of the work based on my suggestions. As a result, I have revised my evaluation from a 3 (Weak Accept) to a 4 (Accept), reflecting my updated assessment of the paper's strengths and improvements. I also commend the authors for their commitment to incorporating all suggestions into the revised version, which will likely enhance the overall contribution of the work. Claims And Evidence: The usefulness of the interpolated dynamics from a biological perspective might be limited as it could be the mathematical artifacts of the Flow Matching formulation rather than a reflection of a concrete biological process. Methods And Evaluation Criteria: The evaluation makes sense for conditional image generation in cell painting. Theoretical Claims: There is no major theoretical claim in this work. Proposition 1 makes intuitive sense and its proof in the appendix also seems correct. 
Experimental Designs Or Analyses: The considered datasets serve the purpose of this paper as they cover different types of perturbations and various cells. Supplementary Material: Yes, it was sufficient to support the main content (proposition, dataset description, etc). Relation To Broader Scientific Literature: - The use of flow matching in a novel context. - Addressing an important problem in biology and drug discovery which is the effect of perturbations on cells. Essential References Not Discussed: - Other Strengths And Weaknesses: ## Strengths - Innovative Approach: Although flow matching is relatively explored as a conditional distribution matching framework, its application in cellular morphology seems novel. - Performance: The model demonstrates significant improvement (35% in FID scores and 12% in mode-of-action prediction accuracy) over existing methods. - Biological Relevance: The generated images are biologically meaningful and capture perturbation-specific morphological changes. - Potential Applications: The ability to interpolate between cellular states could be a valuable tool for studying perturbation dynamics even though the dynamics may not be biologically plausible. ## Weaknesses: - Technical Novelty: Although the application of Flow Matching in this context may be novel, the technical novelty of the work is limited to using an existing method in a new domain. In the absence of a major technical novelty, a more extensive experimental setup is expected to show the utility of the method as an application-driven work. - Complexity: The computational complexity of the method is not discussed. This could bring more clarity on the scalability of the approach. - Control Wells: The importance of control wells is highlighted, but the paper does not detail how variations in control conditions are managed. For example, the wells without perturbations may exhibit different batch effects simply because of being in different wells.
Other Comments Or Suggestions: - Questions For Authors: - How does CellFlow handle variations in control conditions across different batches, and how does this impact the model's predictions? - What are the computational requirements for training CellFlow, and how does it scale with larger datasets or more complex perturbations? - The nature of the method that takes perturbation as input to the velocity network allows for interpolation over the perturbation axis. Given the complexity of biological systems, can such interpolation be trusted? Can authors provide the assumptions about the underlying system that justify trusting the predicted effect of unseen perturbations? - It is imaginable that there is some transfer learning of the effect of perturbations from one dataset to another. For example, when a perturbation is not present in a dataset, observing its effect in another dataset could be helpful to fill the gap in the target dataset. Does CellFlow allow for such transfer of knowledge across datasets? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer Z8BU for recognizing the paper’s proper evaluation criteria, sound theoretical claims, well-designed experiments, comprehensive supplementary material, as well as its innovative approach, strong performance, and biological relevance. We address their concerns below: --- > **Technical novelty**: *How does the work contribute technically beyond applying flow matching to a new domain?* We thank the reviewer for this opportunity to clarify. Our key contribution lies not in modifying flow matching itself, but in **reformulating the problem of cell morphology prediction**: we propose modeling this task as a **distribution-to-distribution generation problem**, rather than the standard noise-to-distribution or single-to-single image prediction. This is a **conceptual shift** that aligns better with how perturbations affect heterogeneous cell populations, and allows for novel capabilities like batch effect correction and perturbation interpolation. Flow matching provides a principled and efficient tool to solve this reframed problem. --- > **More experimental settings**: *In the absence of major technical novelty, a more extensive experimental setup is expected.* We would like to highlight that our experimental setup is **exhaustive** compared to existing works: - **Perturbations**: chemical, genetic, and combined. - **Datasets**: BBBC021, RxRx1, and JUMP. - **Settings**: both **in-distribution (ID)** and **out-of-distribution (OOD)** generalization. - **Evaluation**: includes **overall FID / KID**, **conditional FID / KID**, and **MoA classification**. - **Capabilities**: our model supports **batch effect correction** and **perturbation interpolation**—capabilities that existing methods do not support. --- > **Computational complexity**: *What are the training requirements and scalability for larger datasets or complex perturbations?* CellFlow is **computationally efficient** and scales **linearly** with dataset size. 
We already provided the training details in *Appendix C*: “Models are trained for 100 epochs on 4 A100 GPUs ... requiring 8, 16, and 36 hours for BBBC021, RxRx1, and JUMP, respectively”.

---

> **Control wells**: *How does CellFlow handle variations in control conditions across batches, and how does this impact the model's predictions?*

CellFlow explicitly conditions on control images from the same batch to account for batch-specific variations *(discussed in Section 2.3 in the paper)*. This means that for each perturbed sample, the model is provided with the corresponding unperturbed (control) cells from the **same experimental batch**, enabling it to distinguish true perturbation effects from batch-induced variability. In the table below *(an extension of Table 2d in the paper, with MoA evaluation added as suggested by Reviewer Yg4R)*, we compare generation quality and MoA classification performance when using **same-batch** versus **cross-batch** control initialization. Using controls from the same batch yields significantly better results, highlighting their importance for correcting batch effects and capturing true perturbation signals.

| |FID$_o$|FID$_c$|KID$_o$|KID$_c$|MoA Accuracy|MoA Macro-F1|MoA Weighted-F1|
|-|-|-|-|-|-|-|-|
|Condition on control wells from different batch|23.7|71.9|2.08|2.09|48.2|32.9|48.4|
|Condition on control wells from same batch|18.7|56.8|1.62|1.59|71.2|49.0|70.7|
|Relative improvement|+21.1%|+21.0%|+22.1%|+23.9%|+47.7%|+49.0%|+46.1%|

---

> **Interpolation trustworthiness**: *Can interpolation in perturbation space be trusted biologically?*

We emphasize that interpolation is a **novel and unique capability** of CellFlow, not supported by existing computational tools. Verifying intermediate cell states is inherently difficult, as current biotechnologies do not capture large-scale video-like morphological changes.
Despite this limitation, domain experts have highlighted potential ways to verify this feature, for example in modeling dose-response curves (e.g., predicting medium-dose effects from high/low doses) or time-course dynamics (e.g., estimating 36h outcomes from 24h and 48h measurements). While our paper focuses on the **methodological foundation**, exploring these applications through biological validation is a promising direction for future work. --- > **Cross-dataset transfer**: *Can CellFlow generalize perturbation effects across datasets?* Thank you for the insightful question. We conducted a **transfer experiment** by applying a CellFlow model trained on BBBC021 to RxRx1 and JUMP images. These datasets lack ground-truth perturbed counterparts, making quantitative evaluation infeasible. However, we observed that CellFlow is able to transfer and apply perturbation effects despite substantial domain shifts, with **qualitative examples** available at https://anonymous.4open.science/r/CellFlow-Rebuttal/cross_dataset.png. --- Thank you again for your detailed feedback! We will include all of them in the revised paper. --- Rebuttal Comment 1.1: Comment: Thank you for the response and additional experiments. The clarifications addressed some of my concerns and I have updated my score accordingly. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Z8BU, We’re glad that our updated results have addressed your concerns and that you are willing to improve your evaluation of our work as a result. Your suggestions have enhanced the quality of our work, and we will ensure that all of them are included in the revised paper. Thank you again for your time and constructive feedback!
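The distribution-to-distribution framing discussed in this thread can be illustrated with a minimal flow-matching sketch. This is not the authors' implementation: the data are toy Gaussian stand-ins for control and perturbed cell populations, and a small linear map (with the time input dropped for simplicity) stands in for the neural velocity field.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical): flattened "control" and "perturbed" images,
# drawn as two shifted Gaussian populations to mimic batch-paired wells.
D = 16
x0 = rng.normal(size=(256, D))                      # control cells
x1 = x0 + 2.0 + 0.1 * rng.normal(size=(256, D))     # perturbed cells

# Deliberately tiny linear stand-in for the velocity field v(x) = x @ W + b.
W = np.zeros((D, D))
b = np.zeros(D)

def fm_step(W, b, x0, x1, t, lr=0.3):
    xt = (1 - t)[:, None] * x0 + t[:, None] * x1    # straight-line interpolant
    target = x1 - x0                                # its constant velocity
    err = xt @ W + b - target
    loss = np.mean(err ** 2)
    W -= lr * 2 * xt.T @ err / err.size             # in-place gradient step
    b -= lr * 2 * err.sum(axis=0) / err.size
    return loss

losses = [fm_step(W, b, x0, x1, rng.uniform(size=len(x0))) for _ in range(300)]

# Transport the control population by integrating the learned ODE (Euler).
x = x0.copy()
for _ in range(50):
    x = x + (1 / 50) * (x @ W + b)
```

Integrating the learned velocity field from the control samples carries the whole population toward the perturbed distribution, which is the control-to-perturbed transformation the rebuttal describes at the population level.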
Learning without Isolation: Pathway Protection for Continual Learning
Accept (poster)
Summary: The paper introduces a novel continual learning method specifically designed to address the problem of catastrophic forgetting, which is a significant challenge in the field of machine learning when models are required to learn from a sequence of tasks. The proposed approach leverages pathway protection techniques, which aim to preserve the essential parts of the neural network that are crucial for previously learned tasks while allowing the network to adapt to new tasks. Unlike traditional methods that focus on isolating parameters for each task or using memory replay to store past experiences, pathway protection emphasizes maintaining the integrity of critical pathways within the network. This mechanism not only helps to safeguard previously learned representations but also facilitates the efficient integration of new knowledge, enabling the model to continue learning without losing valuable information from prior tasks. By doing so, the proposed method aims to strike a balance between retaining old knowledge and acquiring new skills, ultimately leading to more robust and scalable continual learning systems. Claims And Evidence: The claims presented in the paper are largely substantiated by comprehensive experimental results, demonstrating the effectiveness of the proposed method for mitigating catastrophic forgetting in continual learning scenarios. Nevertheless, there appear to be some details missing regarding the use of OT in Section 3.2. OT is typically used to find the minimum effort required to transport one distribution to another. However, in the paper, you use OT, specifically the Sinkhorn approximation, to 'transform a binary 0-1 matrix into a soft matching matrix with a sum of 1 through a process of bi-directional relaxation.' Could you kindly clarify how this approach relates to the traditional use of OT? It would be helpful to elaborate on this process further in the main text. 
Methods And Evaluation Criteria: While the authors have demonstrated the feasibility of the proposed method across various techniques, datasets, and architectures, I have a few suggestions for further improvement. 1. Although the authors evaluated forgetting rates using the ResNet18 architecture on the CIFAR-100 dataset, it would still be beneficial to test the forgetting rates of ResNet32 on CIFAR-100 and ResNet18 on the Tiny-ImageNet dataset. 2. Related works that rely on different paths in a network and sparsity are not cited or compared against, qualitatively or empirically [1,2,3]. Theoretical Claims: The authors have provided a detailed explanation of the theories used. 1. Pathway Protection Enhances Knowledge Retention (Guaranteed by sparsity) The theoretical foundation of the proposed method is rooted in the concept of pathway protection, which is designed to safeguard key pathways or network parameters that represent learned knowledge from previous tasks. 2. Graph Matching Method The authors propose a graph-based method for integrating weights from previous models into the current model, which theoretically capitalizes on the structural similarity between shallow and deep layers. Shallow layers are assumed to share common representations across tasks, while deeper layers specialize in more task-specific features. Experimental Designs Or Analyses: 1. The authors have selected a wide range of well-established benchmark datasets for their experiments, including CIFAR-10, CIFAR-100, and Tiny-ImageNet, to assess the effectiveness of their proposed method in various continual learning settings. 2. The authors clearly define their experimental setup, including the use of ResNet18 and ResNet32 architectures for evaluating the accuracy rates on the given datasets. 3. One of the strengths of the experimental design is the inclusion of a series of ablation studies to evaluate the individual components of the proposed method.
These studies help isolate the effects of key factors, such as pathway protection and the graph-based integration of model weights, on the overall performance. By systematically varying the design choices (e.g., whether pathway protection is applied or not), the authors are able to demonstrate the contributions of each component to the method’s success in mitigating catastrophic forgetting. 4. The paper includes experiments in both task-incremental and class-incremental settings, two common setups in continual learning. Supplementary Material: Yes, I have reviewed the supplementary material. I focused primarily on the detailed experimental results and the implementation specifics, which provide valuable insights into the reproducibility of the proposed method. The additional experiments and ablation studies included in the supplementary material helped clarify several aspects of the method that were briefly touched upon in the main paper, especially regarding the sensitivity of the method to different configurations. I also found the detailed pseudo-code and additional analysis of model sensitivity to hyperparameters particularly helpful for understanding the inner workings of the approach. However, some additional clarification could be provided on certain figures and tables in the supplementary material, specifically those illustrating the comparative performance across different model architectures and datasets. This would further aid in understanding the scope and limitations of the proposed approach. Relation To Broader Scientific Literature: The proposed method in this paper draws inspiration from concepts found in neuroscience, particularly regarding how the human brain handles learning and memory retention over time. In continual learning, one of the primary challenges is catastrophic forgetting—where the model "forgets" previously learned information as new tasks are learned.
This mirrors how human brains can forget previously learned information when new knowledge is acquired, a phenomenon known as interference in cognitive psychology. Recent research in neuroscience has shown that the brain employs mechanisms such as synaptic consolidation and neuroplasticity to minimize forgetting. In particular, the brain strengthens and modifies synaptic connections between neurons as new information is learned while maintaining previously established pathways. This process ensures that long-term memories are not easily overwritten. The method proposed in this paper mimics this idea by implementing pathway protection strategies, which involve protecting previously learned features or representations during the learning of new tasks, in a manner similar to how the brain "protects" neural connections when learning new information. Essential References Not Discussed: Related works that rely on different paths in a network and sparsity are not cited or compared against, qualitatively or empirically, such as [1, 2, 3]. [1] PathNet: https://arxiv.org/abs/1701.08734 [2] DEN: https://arxiv.org/abs/1708.01547 [3] APD: https://arxiv.org/abs/1902.09432 Other Strengths And Weaknesses: Strengths: 1. The authors propose a method called Learning without Isolation (LwI), where, at each step in continual learning, a new model is trained on a task while distilling knowledge from an existing model. This idea aligns closely with the concept of information flow in the human brain, while also integrating the characteristics of neural networks into this research. 2. The authors conceptualize the model’s parameters as a graph: they construct paths as convex combinations of new and old model weights, applying a permutation matrix to the new model's weights. For shallow layers, the permutation matrix is determined by the similarity between adjacent layer weights, while for deeper layers, it is based on the negative similarity between graph nodes.
This encourages shared paths across different tasks, promoting feature reuse in the shallow layers, and fosters the use of distinct paths in the deeper layers. 3. Extensive experiments are conducted on image classification continual learning tasks, utilizing benchmarks such as CIFAR-10, CIFAR-100, and Tiny-ImageNet, as well as using different architectures of neural networks to validate the effectiveness of the proposed method. The authors examine both task-agnostic and task-aware settings. Weaknesses: In the experiments presented in the paper, the authors evaluate the method on a maximum of 20 tasks. This raises an important question: how does the model perform when the number of tasks is increased by an order of magnitude? It seems reasonable to expect that as the number of tasks grows, there will be a tradeoff between the network's ability to accommodate more tasks and the overall model size. This tradeoff is likely to impact both the efficiency of the network and its capacity to preserve knowledge across tasks. Other Comments Or Suggestions: Please address the concerns I raised in the weaknesses section. Based on your responses, I would be happy to reconsider my scores. Questions For Authors: Please check the above concerns. Code Of Conduct: Affirmed. Overall Recommendation: 4
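The fusion step described in this review's strengths (applying a permutation matrix to the new model's weights, then taking a convex combination with the old model's) can be sketched as follows. This is an illustrative reconstruction from the review's description, not the authors' code; the layer shapes and the choice of `lam=0.5` are assumptions.

```python
import numpy as np

def permutation_matrix(perm):
    """Row i of P selects neuron perm[i], so (P @ w)[i] = w[perm[i]]."""
    P = np.zeros((len(perm), len(perm)))
    P[np.arange(len(perm)), perm] = 1.0
    return P

def fuse_layer(w_old, w_new, P_out, P_in, lam=0.5):
    """Align w_new to w_old's neuron ordering (output channels via P_out,
    input channels via the previous layer's permutation P_in), then take
    a convex combination of the two weight matrices."""
    return lam * w_old + (1 - lam) * P_out @ w_new @ P_in.T

# Sanity check: if w_new is just a channel-shuffled copy of w_old, the
# aligned fusion recovers w_old exactly.
rng = np.random.default_rng(1)
w_old = rng.normal(size=(8, 8))
P_out = permutation_matrix(rng.permutation(8))
P_in = permutation_matrix(rng.permutation(8))
w_new = P_out.T @ w_old @ P_in
fused = fuse_layer(w_old, w_new, P_out, P_in)
```

The sanity check is the property that makes averaging meaningful: without alignment, averaging two shuffled copies of the same weights would cancel effective components, which is exactly the interference concern the paper raises about naive fusion.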
Rebuttal 1: Rebuttal: **Response to Claims:** Thanks for pointing out the issue. We would like to explain it as follows. 1. In continual learning, considering the lack of correspondence between neurons in Model 1 and Model 2, it is possible that the function of the p-th neuron in Model 1 is very similar to that of the (p+1)-th neuron in Model 2, despite their different positions. Therefore, when using the OT algorithm in neural networks, **we aim to optimally (minimizing cost) transfer neurons from a specific layer in Model 1 to the corresponding layer in Model 2**, achieving model alignment. 2. **The advantages of using a soft matching algorithm.** - Using entropy regularization and bidirectional relaxation ensures that the optimization function is smoother. - **One-to-one matching hinders knowledge sharing.** One weight in Model 1 might be similar to multiple weights in Model 2. In such cases, using one-to-one matching may set the corresponding weights with similar knowledge to zero, hindering knowledge sharing. 3. We compared direct fusion and fusion using hard matching, and the results show that soft matching is more advantageous. ResNet18-based model on the CIFAR-100 dataset, measured under task-aware scenarios.

|Method|5 splits|10 splits|20 splits|
|-|-|-|-|
|Ours w/o alignment|66.13±0.98|68.80±1.65|70.30±1.27|
|Ours with OT|75.73±0.58|77.73±0.76|80.60±0.71|
|**Ours**|**81.10±0.80**|**84.90±0.36**|**86.49±0.55**|

**Response to Essential References:** Thanks for the meaningful suggestions. We indeed omitted the comparisons in our manuscript. **Comparison with [1]** 1. **The different approaches for protecting task-related knowledge.** [1] employs a genetic algorithm. It selects the parameter subset for the next task while fixing the important parameters of the previous tasks, whereas our method achieves knowledge protection through a matching-based approach. 2. [1] does not train frozen parameters, whereas we train all parameters. **Comparison with [2]** 1.
[2] selectively retrains network parameters and adapts to different tasks by dynamically changing neurons. However, our method protects knowledge of different tasks by finding the optimal pathways for each task and using matching techniques. 2. The method proposed in [2] dynamically increases network capacity. In contrast, our method uses a fixed network capacity approach. **Comparison with [3]** 1. [3] protects knowledge through parameter decomposition and masking techniques. However, our method protects important pathways through matching techniques. 2. [3] focuses on protecting network connections between adjacent layers, whereas we consider a complete pathway from input to output. **Response to Weakness:** Thanks for the valuable question. - Regarding the number of learnable tasks, our method employs soft protection, allowing for a greater number of learnable tasks. Compared to hard mask methods like SupSup, we use all network parameters and protect important task paths through misaligned fusion. Based on the reviewer's suggestions, we conducted experiments on Tiny-ImageNet with 100 tasks, each comprising two classes. The results show that our method performs very well. - To verify the performance of our method when increasing the task number by an order of magnitude, we split Tiny-ImageNet into 100 tasks and conducted tests using different methods. The results show that our method achieves better performance.

| Method | SPG | SPU | WSN | EWC | LwF | SupSup | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 100 splits | 43.40±0.45 | 52.90±1.02 | 60.60±0.43 | 50.61±0.61 | 56.82±0.55 | 49.47±0.52 | **64.63±0.14** |

--- Rebuttal Comment 1.1: Comment: The rebuttal emphasizes the strengths of the proposed method in optimal knowledge transfer, effective task protection, and scalability. By leveraging the OT algorithm for neuron alignment and soft matching techniques for smoother optimization, the method ensures better knowledge retention compared to existing approaches.
Unlike dynamic neuron adaptation or hard-masking methods, it protects knowledge via a fixed network capacity approach while enabling a complete knowledge flow from input to output. Experiments on CIFAR-100 and Tiny ImageNet (100 tasks) further validate its superior performance and scalability. Based on these points, I have decided to raise my score to 4. --- Reply to Comment 1.1.1: Comment: **Many thanks for raising the score!** Thank you very much for your insightful suggestions, which have been greatly enlightening and are crucial for enhancing the quality of our paper! Comparing our approach with provided methods and the latest techniques will enhance the competitiveness of our paper. Additionally, investigating OT methods will contribute to improving the quality of our research. The discussion regarding more detailed issues has further deepened the analysis of the experimental results, which can enhance the persuasiveness of our paper. We will adhere to these suggestions in the final version and also revise the paper according to all other comments.
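The entropy-regularized soft matching discussed in this thread (OT solved with the Sinkhorn approximation) can be sketched as follows. This is an illustrative toy, not the authors' code: neuron "costs" are squared distances between weight rows, and the marginals are assumed uniform.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=200):
    """Entropy-regularized OT: relax a hard 0-1 matching into a
    doubly-stochastic 'soft matching' transport plan."""
    K = np.exp(-cost / reg)
    r = np.full(cost.shape[0], 1.0 / cost.shape[0])  # uniform row marginal
    c = np.full(cost.shape[1], 1.0 / cost.shape[1])  # uniform column marginal
    u = np.ones_like(r)
    v = np.ones_like(c)
    for _ in range(n_iters):                         # alternating scaling
        u = r / (K @ v)
        v = c / (K.T @ u)
    return u[:, None] * K * v[None, :]               # transport plan

rng = np.random.default_rng(0)
W1 = rng.normal(size=(6, 4))               # 6 neurons, 4 incoming weights each
perm = rng.permutation(6)
W2 = W1[perm] + 0.01 * rng.normal(size=(6, 4))   # permuted, slightly noisy copy

# Cost: squared distance between every pair of neuron weight vectors.
cost = ((W1[:, None, :] - W2[None, :, :]) ** 2).sum(-1)
T = sinkhorn(cost, reg=0.05)
recovered = T.argmax(axis=1)   # hard read-out: should invert the permutation
```

Unlike a hard 0-1 permutation, the returned plan is doubly stochastic, so one neuron in Model 1 can share mass with several similar neurons in Model 2, which is the knowledge-sharing advantage of soft matching that the rebuttal highlights.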
Summary: The paper proposes a novel continual learning framework that assigns distinct neural pathways to different tasks, enabling knowledge retention while replacing traditional masking & pruning methods. The authors use graph matching for model fusion, leveraging neural network properties by maximizing similarity alignment in shallow layers and minimizing it in deeper layers. Knowledge distillation is applied to constrain parameter deviations. The framework is validated on networks of different sizes and datasets of different sizes. Claims And Evidence: Most claims made in the submission are clear. However, there are some areas that need further clarification: 1. Although the 'Activation level' in Figure 2 is explained in the text, some aspects remain unclear. Could the authors provide a more detailed explanation of the results shown in Figure 2? 2. Claim about the trade-off between performance and cost. The submission asserts that pathway protection maintains high performance while reducing computational overhead. While the experiments show promising results in terms of performance, a more detailed analysis comparing the computational cost (e.g., memory usage, FLOPs) would provide stronger evidence. Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, no issues found Experimental Designs Or Analyses: Yes. 2. Although multiple datasets were used, no corresponding comparative experiments were conducted on larger datasets, such as ImageNet-R. Supplementary Material: Yes. Analysis of time complexity and implementation details. Relation To Broader Scientific Literature: The method contributes to the broader scientific literature by addressing the continual learning problem with a novel method. Essential References Not Discussed: This manuscript lacks comparisons with some relevant approaches, such as [1, 2, 3]. [1] "NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks." International Conference on Machine Learning.
[2] "Spacenet: Make free space for continual learning." Neurocomputing. [3] "Sparcl: Sparse continual learning on the edge." Advances in Neural Information Processing Systems. Other Strengths And Weaknesses: Strengths: 1. The paper explores the lottery ticket hypothesis under neural network sparsity. By analyzing the complete pathway from input to output, it ensures task-specific pathway preservation. 2. To validate the effectiveness of the proposed framework, experiments are conducted on networks of different sizes across datasets of varying complexity. 3. The paper is well-structured and written, enhancing readability and facilitating a deeper understanding of the content. Weaknesses: The authors need to conduct experiments on larger datasets (Imagenet-1K?) and backbones (maybe resnet50? vit?). Other Comments Or Suggestions: Please check the above concerns. Questions For Authors: Please check the above concerns. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Response to C1:** Thank you for the reviewer's reminder. We sincerely apologize for the misunderstanding caused by our negligence. A precise description will be provided in future revisions. The concept of "Activation Level" refers to the average magnitude of the weights obtained after activation in the last layer of the feature extraction phase. We utilize activation levels to measure whether pathways associated with different tasks can be distinguished. Through the left side of Figure 2, we observed that among these channels, in our method, there consistently exists a channel specific to a task. In contrast, for the LwF method, the activation levels of its channels exhibit a phenomenon of intermixing. **Response to C2:** Thanks for pointing out the issue. We would like to explain it as follows. - We conducted a corresponding analysis, including the optimization of our proposed method in terms of time complexity. In the context of our hierarchical matching approach, we analyze its time complexity as follows. Given a deep network with $N_L$ layers, each containing $C$ channels, traditional graph matching incurs a time complexity of $O(N^4)$, where $N$ denotes the total number of nodes in the graph. However, by employing a hierarchical matching strategy for deep networks, we can compute the time complexity separately for each layer and then aggregate the results. Consequently, the overall time complexity of our approach is: $$O\left(\sum_{l=1}^{N_L} C^4\right) = O\left(\sum_{l=1}^{N_L} \left(\frac{N}{N_L}\right)^4\right) = O\left(\frac{N^4}{N_L^3}\right).$$ - Although graph matching is generally an NP-hard problem, using bilateral relaxation transforms it into a solvable problem by converting the general graph matching problem into a bipartite graph matching problem. The original graph matching problem may involve graphs with arbitrary topological structures, but bipartite graph matching is relatively easier to solve computationally.
In this paper, we use the Sinkhorn algorithm, which is a polynomial-time algorithm. ****Response to Essential References:**** Thank you for providing these papers. We indeed omitted the comparisons in our manuscript. We will incorporate references to these papers in our study and conduct comparative analyses accordingly. **Comparison with [1]**: 1. This method, like ours, focuses on **protecting connection pathways**. 2. **The specific connection points protected are different**. [1] focuses on protecting network connections between adjacent layers. 3. **The methods for preserving knowledge from previous tasks differ**. [1] protects important connections for previous tasks by freezing them, whereas we use a soft protection method. 4. **The number of trainable parameters for the next task is different**. [1] does not train frozen parameters. 5. **The protection of data privacy protection is different**. [1] requires storing previous data, whereas our method does not require storage. **Comparison with [2]**: 1. **The different approaches for protecting task-related knowledge**. [2] protects task-specific knowledge by compressing and safeguarding neurons that are crucial for particular tasks. In contrast, our method achieves task knowledge protection through matching-based approach. 2. **[2] aims to reduce the interference between different tasks, whereas our method achieves knowledge sharing through interference.** **Comparison with [3]**: 1. **The different approaches for protecting task-related knowledge**. [3] employs dynamic masking (including for parameters and gradients) to protect task knowledge, whereas our approach utilizes a matching-based method. 2. **Our method facilitates the propagation of knowledge, while [3] prohibits it**. [3] employed in this paper utilizes hard masking, which is detrimental to the sharing of common knowledge across different tasks. 3. **The protection of data privacy protection is different**. [3] requires storing previous data. 
We replicated two of the above methods and integrated them into our framework for comparative analysis. ResNet32-based model on the CIFAR-100 dataset.

|Method|5 splits|10 splits|20 splits|
|-|-|-|-|
|NISPA [1]|69.34±0.81|73.41±0.32|76.42±0.27|
|Sparcl [3]|72.04±0.48|75.08±0.81|77.40±0.21|
|**Ours**|**76.10±0.33**|**81.12±0.90**|**83.19±0.35**|

**Response to Weaknesses:** We conducted corresponding experimental tests on the ImageNet-R dataset and also performed tests using ResNet50.

**(1) ResNet32-based model on ImageNet-R.**

|Method|5 splits|10 splits|20 splits|
|-|-|-|-|
|EWC|24.6|27.1|29.3|
|LwF|25.0|28.6|30.3|
|RWalk|26.1|28.4|26.6|
|WSN|28.2|30.4|32.1|
|SPG|23.2|24.1|22.7|
|SPU|26.2|27.6|27.1|
|GPM|26.7|27.1|29.3|
|Ours|**32.8**|**34.3**|**37.3**|

**(2) ResNet50-based model on CIFAR-100**

|Method|5 splits|10 splits|20 splits|
|-|-|-|-|
|EWC|70.9|62.7|57.6|
|LwF|79.2|80.3|82.7|
|RWalk|70.3|66.3|62.1|
|WSN|80.2|81.4|84.3|
|SPG|55.3|57.2|55.1|
|SPU|60.5|62.1|61.7|
|Ours|**84.4**|**87.5**|**89.7**|

--- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. Most of my concerns have been addressed. I will increase my score accordingly. --- Reply to Comment 1.1.1: Comment: **Many thanks for increasing the score!** We sincerely thank the reviewer for the valuable feedback and recognition. We are glad that the additional comparisons and experimental results addressed the concerns and contributed positively to the evaluation. We will incorporate these newly added analyses and experimental results into the final version of the paper to further strengthen the presentation and completeness of our work.
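The hierarchical-matching complexity argument from this thread ($O(N^4)$ for one global graph match versus $O(N^4/N_L^3)$ when matching layer by layer) can be checked with a small op-count sketch; the ResNet-like layer and channel counts below are illustrative assumptions, not figures from the paper.

```python
# Numeric check (illustrative) of the complexity argument: matching all
# N = n_layers * channels nodes jointly costs O(N^4), while matching each
# layer separately costs n_layers * channels^4 = N^4 / n_layers^3.

def matching_ops(n_layers, channels):
    n_total = n_layers * channels
    global_ops = n_total ** 4                  # single joint matching problem
    layerwise_ops = n_layers * channels ** 4   # one matching per layer
    return global_ops, layerwise_ops

g, l = matching_ops(n_layers=18, channels=64)  # toy ResNet18-like shape
speedup = g // l                               # equals n_layers**3
```

For 18 layers the layer-wise strategy is $18^3 = 5832$ times cheaper, matching the $1/N_L^3$ factor in the rebuttal's derivation.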
Summary: This paper proposes a new framework for continual learning (CL) called Learning without Isolation (LwI), which introduces pathway protection as a mechanism to mitigate catastrophic forgetting. Unlike traditional CL methods that focus on parameter protection, LwI prioritizes preserving activation pathways in deep neural networks, inspired by neuroscientific principles of sparsity in neural activations. The model fusion process is framed as a graph matching problem, where activation pathways in neural networks are aligned using a similarity-based approach. The proposed method is rehearsal-free, and the authors evaluate it on CIFAR-100 and Tiny-ImageNet using ResNet-32 and ResNet-18, demonstrating that the proposed method outperforms the baselines.

Claims And Evidence: Claims with support:
- The authors argue that parameter protection leads to task isolation and inefficient parameter usage. Empirical results show that LwI outperforms parameter-isolation methods (e.g., WSN) in both task-aware and task-agnostic settings.
- The paper introduces a graph-matching approach to align activation pathways before merging models. The method achieves better performance, as shown in ablation studies, where removing pathway matching leads to lower accuracy.

Claims that need further support:
- The paper states that graph matching has a complexity of $O(N^4)$ but uses a layer-wise approach to reduce computational cost. No detailed runtime analysis or comparison with other CL methods is provided.
- The paper argues that the proposed method is effective. The proposed method does outperform the baselines, but the accuracies are too weak, as there are existing methods such as [1] that perform significantly better on both CIFAR-100 and Tiny-ImageNet. For example, [1] is a rehearsal-free method, and it achieves more than 65% accuracy on task-agnostic CIFAR-100 10-splits, while the proposed method achieves only 30% accuracy.

[1] A theoretical study on solving continual learning.
NeurIPS 2022
- The authors claim that parameter-isolation methods need to know the task identity at inference time. However, this is not correct. Task-incremental learning (TIL) methods such as parameter isolation can be task-agnostic, as theoretically demonstrated in [1].

Methods And Evaluation Criteria: The proposed methods and evaluation criteria largely align with the problem of continual learning.

Theoretical Claims: No theoretical analysis was provided. Some arguments made by the authors need theoretical justification. For example, "Simple averaging may lead to interference and even cancellation of effective components, a concern exacerbated during continual learning" in lines 104-107.

Experimental Designs Or Analyses: The experimental design in the paper is generally well-structured, but there are some areas that could be improved or clarified.
- CIFAR-100 and Tiny-ImageNet are widely accepted benchmarks for continual learning. The datasets are split into 5, 10, and 20 tasks, allowing evaluation of performance under different levels of granularity.
- Comparisons with some recent CIL methods such as [1] and parameter-isolation methods such as SupSup [2] are missing.

[1] A theoretical study on solving continual learning. NeurIPS 2022
[2] Supermasks in superposition. NeurIPS 2022

- Task-incremental learning (TIL) methods such as WSN can also be used for task-agnostic evaluation [1].

Supplementary Material: I checked Section B, implementation details.

Relation To Broader Scientific Literature:
- Relation to CL: LwI introduces a new paradigm where pathway protection is emphasized over parameter protection, leveraging graph matching for model fusion.
- Relation to model fusion and graph matching: Moves beyond naive weight averaging by using structured pathway alignment, ensuring better knowledge transfer between tasks.
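The "align channels, then average" fusion idea the review describes can be illustrated with a toy, pure-Python sketch. It uses brute-force hard matching by cosine similarity for clarity; the paper itself uses graph matching with soft (Sinkhorn-based) matching, and all names here are hypothetical:

```python
# Toy sketch (not the paper's algorithm): permute the new layer's channels
# to best match the old layer by cosine similarity, then average weights.
# Brute-force matching over permutations is used only for illustration.
from itertools import permutations
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def align_then_average(w_old, w_new, eta=0.5):
    """w_old, w_new: lists of C channel weight vectors; returns fused weights."""
    C = len(w_old)
    # Permutation of new channels maximizing total similarity to old channels.
    best = max(permutations(range(C)),
               key=lambda p: sum(cosine(w_old[i], w_new[p[i]]) for i in range(C)))
    aligned = [w_new[best[i]] for i in range(C)]
    # Convex combination of old weights and the aligned new weights.
    return [[(1 - eta) * a + eta * b for a, b in zip(row_o, row_n)]
            for row_o, row_n in zip(w_old, aligned)]
```

With two channels whose order is swapped between models, alignment recovers the correspondence before averaging, which is exactly the failure mode of naive averaging that the review contrasts this against.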
Essential References Not Discussed: Refer to the previous comments.

Other Strengths And Weaknesses: NA

Other Comments Or Suggestions: There are several minor mistakes.
- "Activation Level" in line 087. In LaTeX, use double backticks for an opening quotation mark. This incorrect quotation style appears in other places throughout the paper.
- "... over-parameterized deep deep networks to allow flexibility for future tasks." The word "deep" appears twice.

Questions For Authors: NA

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1:

Rebuttal: **Response to C1:** Thank you for the reviewer's reminder.
- We conducted a corresponding analysis, including the optimization of our proposed method in terms of time complexity. In the context of our hierarchical matching approach, we analyze the time complexity as follows. Given a deep network with $N_L$ layers, each containing $C$ channels, traditional graph matching incurs a time complexity of $O(N^4)$, where $N$ denotes the total number of nodes in the graph (all neurons in the neural network). However, by employing a hierarchical matching strategy for deep networks, we can compute the time complexity separately for each layer and then aggregate the results. Consequently, the overall time complexity of our approach is:
$$
O\Big(\sum_{l=1}^{N_L} C^4\Big) = O\Big(\sum_{l=1}^{N_L} \big(\tfrac{N}{N_L}\big)^4\Big) = O\Big(\tfrac{1}{N_L^3} N^4\Big).
$$
- To address the reviewer's concerns, we measured the time cost and found that, despite the added matching fusion module, our method's runtime is second only to LwF.

**Runtime of ResNet32 on CIFAR-100 (min)**

|Method|5 splits|10 splits|20 splits|
|-|-|-|-|
|EWC|157|159|201|
|LwF|105|145|173|
|RWalk|225|236|260|
|WSN|276|298|321|
|Ours|110|157|186|

**Response to C2 and Experimental 2, 3:** Thank you for the reviewer's reminder.
- During inference, the space and time complexity of [1] are both **O(N)**, where N is the number of tasks. In contrast, our method has both time and space complexity of **O(1)**. To ensure a fair comparison under **O(1)** complexity, we tested using the mask of the last task as well as the intersection of masks from different tasks, ensuring that the time complexity during testing remains **O(1)**. The results of methods such as **WSN** and **GPM** were obtained through comparisons in task-incremental learning.
**ResNet18-based model on CIFAR-100.**

|Method|10 splits|20 splits|
|-|-|-|
|Ours|**50.9**|**47.7**|
|HAT + [1] (Final)|10.2|12.9|
|HAT + [1] (Intersection)|13.5|15.4|
|SupSup + [1] (Final)|15.2|18.8|
|SupSup + [1] (Intersection)|18.7|23.4|

- Due to differences in the **training environment, dataset splitting methods, and training details** (e.g., we use **200 epochs**, whereas [1] uses **700 and 1000 epochs**), the final training results vary. Additionally, the baseline methods in our paper are implemented based on the code from **[2]**, which aligns well with the corresponding results in **[2]**. We integrated the **LwF** method into the training process of **[1]**, leading to the following results:

**ResNet18-based model on CIFAR-100.**

|Method|10 splits|20 splits|
|-|-|-|
|Ours|**50.9**|**47.7**|
|LwF|42.2|40.8|

[2] Class-incremental learning: survey and performance evaluation on image classification. TPAMI, 2022.

**Response to C3 and Experimental 2, 3:** Thank you for the reviewer's reminder. The reviewer is indeed correct: it is possible to infer the task ID, just not using the approach in [1]. The SupSup paper presents an algorithm for task ID inference based on optimization rather than an O(N)-complexity method. However, the inferred task ID may not always be accurate (especially when the number of tasks is large, such as 100 tasks). When we compared this approach with ours, we found that our method outperforms it in task-agnostic scenarios. Lastly, we will revise our wording and include additional experiments in task-agnostic settings.

|Method|SupSup|Ours|
|-|-|-|
|100 splits|7.47|**13.63**|

**Response to Theory:** We greatly appreciate the reviewer for pointing out this issue. We have performed the corresponding theoretical derivation and will include it in the paper. The detailed derivation process for 1 and 2 can be found in the response to Reviewer mxBD.

### 1.
Forgetting Bound
$$\mathcal{F}(T) \leq \eta\sqrt{2(1-\kappa_S)} \ \text{(Shallow)} + \lambda \epsilon \ \text{(Deep)} + O\left(\sqrt{\frac{\log T}{T}}\right)$$

### 2. Task Number
$$T_{\max} \leq \frac{C}{2\epsilon} \log\left(1 + \frac{C}{\epsilon \Delta^2}\right)$$

We divided Tiny-ImageNet into 100 tasks and compared different approaches.

|Method|SPG|SPU|WSN|EWC|LwF|SupSup|Ours|
|-|-|-|-|-|-|-|-|
|100 splits|43.40|52.90|60.60|50.61|56.82|49.47|**64.63**|

### 3. Naive Avg Bound:
$$\mathcal{F}_{\text{avg}} \geq \frac{1}{2}\sqrt{\sum \delta_c^2} + \lambda C + \mathcal{O}(1)$$
- $\mathcal{F}_{\text{avg}}$: Forgetting measure (naive averaging)
- $\delta_c$: Channel misalignment distance for filter $c$

When channel alignment quality $\kappa_S > 0.5$ and overlap ratio $\epsilon < 1/C$, the forgetting ratio satisfies:
$$\frac{\mathcal{F}_{\text{LwI}}}{\mathcal{F}_{\text{avg}}} \approx \sqrt{2(1-\kappa_S)} \ \text{(alignment gain)} \cdot \epsilon \ \text{(sparsity gain)} + O(T^{-1/2})$$

1. Our method provides an **upper bound** on forgetting.
2. Naive averaging only provides a **lower bound**.
3. Experiments:

|Method|5 splits|10 splits|20 splits|
|-|-|-|-|
|Naive Avg|72.13|74.80|75.30|
|Ours|**81.10**|**84.90**|**86.49**|
Summary: The paper introduces a novel approach to continual learning, termed "Learning without Isolation" (LwI), which aims to mitigate catastrophic forgetting by protecting distinct activation pathways for different tasks within a deep network. The key idea is to allocate unique pathways for each task, ensuring that knowledge from previous tasks is preserved while learning new tasks. The authors propose a data-free continual learning method based on graph matching, which aligns channels in the deep network before model fusion. This approach is inspired by the sparsity of activation channels in neural networks and the hierarchical structure of the brain. The method is evaluated on CIFAR-100 and Tiny-ImageNet datasets, demonstrating superior performance compared to existing continual learning methods, particularly in task-agnostic scenarios.

Claims And Evidence: The claims made in the submission are generally supported by clear and convincing evidence, particularly through extensive experimental results on CIFAR-100 and Tiny-ImageNet datasets, which demonstrate the method's superiority over state-of-the-art continual learning approaches. The ablation studies further validate the effectiveness of pathway protection and graph matching. However, some claims, such as the scalability to larger models and the theoretical foundations of pathway protection, lack sufficient evidence. The paper primarily validates the method on smaller models (ResNet32, ResNet18), and a deeper theoretical analysis is absent. Addressing these gaps by including experiments on larger models and providing more theoretical insights would strengthen the overall credibility of the claims.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for addressing the problem of continual learning, particularly in mitigating catastrophic forgetting.
The use of benchmark datasets like CIFAR-100 and Tiny-ImageNet, along with task-agnostic and task-aware evaluations, provides a robust framework for assessing the method's performance.

Theoretical Claims: The authors analyze one layer of a deep network channel, and a first-order Taylor expansion is used for the analysis. It has been correctly demonstrated.

Experimental Designs Or Analyses: Please refer to the Claims And Evidence part.

Supplementary Material: Yes, I reviewed the supplementary material, which includes additional experimental results, ablation studies, and implementation details. The supplementary material provides further validation of the proposed method, including experiments on different similarity measurement formulas (Euclidean distance vs. cosine similarity), the impact of the knowledge distillation module, and the effectiveness of the task diversion module. It also includes details on the experimental setup, such as hyperparameters and training procedures, which enhance the reproducibility of the results.

Relation To Broader Scientific Literature: The proposed method of pathway protection via graph matching builds on prior work in regularization-based, rehearsal-based, and architecture-based continual learning approaches.

Essential References Not Discussed: No

Other Strengths And Weaknesses:

## Paper Strengths

The paper presents a novel and theoretically grounded approach to continual learning by leveraging pathway protection and graph matching. This is a significant departure from traditional methods that rely on regularization, rehearsal, or dynamic architectures. The proposed method does not require storing data from previous tasks, which is a significant advantage in terms of data privacy and storage efficiency. The authors provide extensive experimental results showing that their method outperforms several state-of-the-art continual learning methods, particularly in task-agnostic settings.
The results are convincing and well-supported by ablation studies. The paper effectively leverages the sparsity of activation channels in deep networks, drawing parallels with neuroscience to justify the approach. This adds a layer of biological plausibility to the method.

## Major Weaknesses

While the empirical results are strong, the paper lacks a thorough theoretical analysis of why the proposed method works. For instance, the authors could provide more insights into the conditions under which pathway protection is most effective or how the method scales with the number of tasks. The paper primarily validates the method on relatively small models (ResNet32, ResNet18). It would be beneficial to see how the method performs on larger models, such as those used in modern large-scale applications.

Other Comments Or Suggestions: No

Questions For Authors: 1. Why is the model limited to ResNet18 and ResNet32?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: **Response to W1:** Thank you for the reviewer's reminder. We have supplemented our theoretical derivations.

## 1. Core Theoretical Framework

### 1.1 Shallow Layers (Shared knowledge)

**Core:** Minimize weight differences through optimal-transport alignment of similar channels.

1. Parameter Update. Merged shallow weights:
$$W_S^{\text{merged}} = (1-\eta)W_S^{\text{old}} + \eta P_S W_S^{\text{new}}$$
where $P_S$ is the permutation matrix maximizing similarity.

2. Difference Bound.
- Define the channel similarity matrix $K_S$, where:
$$P_S = \arg\max_P \text{tr}(P^\top K_S) \quad \text{with} \quad K_S[i,j] = \frac{w_i^\top w_j}{\|w_i\|\|w_j\|}, \quad w_i = W_S^{\text{old}}[:,i], \; w_j = W_S^{\text{new}}[:,j]$$
- Optimal transport guarantees the existence of $\kappa_S = \max K_S[i,j]$ with:
$$\|P_S W_S^{\text{new}} - W_S^{\text{old}}\|_F^2 = 2(1 - \kappa_S)$$
- Therefore:
$$\|W_S^{\text{merged}} - W_S^{\text{old}}\| \leq \eta \sqrt{2(1 - \kappa_S)}$$

### 1.2 Deep Layers (Task-specific isolation)

**Core:** Limit inter-task interference through sparse channel separation.

1. Channel Overlap. Let tasks $t$ and $t'$ share at most $\epsilon C$ channels:
$$|\mathcal{A}_t \cap \mathcal{A}_{t'}| \leq \epsilon C$$

2. Probability Bound. Using a Chernoff bound:
$$\mathbb{P}(\text{Overlap} \geq \epsilon C) \leq e^{-\epsilon C/2}$$

3. Worst-case Gradient Conflict. With Lipschitz constant $\lambda$ and task separation $\Delta$:
$$\| \nabla L_t - \nabla L_{t'} \| \leq \lambda / (\epsilon \Delta^2)$$

**Simplified Form:** When $\Delta \propto 1/\epsilon$ (implied by sparsity):
$$\text{Deep Term} = \lambda \epsilon$$

### 1.3 Dynamic Error Term Derivation

**Core:** Azuma-Hoeffding inequality for martingales.

1. Regret Definition. For a task sequence with losses $L_t(\theta_t)$:
$$R(T) = \sum_{t=1}^T L_t(\theta_t) - \min_\theta \sum_{t=1}^T L_t(\theta)$$

2. Concentration Bound. With probability $1-\delta$:
$$R(T) \leq \sqrt{2T \log(1/\delta)}$$

3.
Per-Task Error. Setting $\delta = 1/T$:
$$\frac{R(T)}{T} = \mathcal{O}\left(\sqrt{\frac{\log T}{T}}\right)$$

### 1.4 Forgetting Bound Theorem

$$\mathcal{F}(T) \leq \eta\sqrt{2(1-\kappa_S)} \ \text{(Shallow)} + \lambda \epsilon \ \text{(Deep)} + O\left(\sqrt{\frac{\log T}{T}}\right)$$

Where:
- $\kappa_S = \max \mathbf{K}_S[i,j]$ (peak shallow similarity)
- $\epsilon$ = channel overlap ratio
- $\lambda$ = task conflict intensity (Lipschitz constant)

### 1.5 Hyperparameter Settings

|Parameter|Effect on Forgetting|
|-|-|
|$\eta$|$\eta \downarrow \Rightarrow \mathcal{F}_S \downarrow$|
|$\epsilon$|$\epsilon \downarrow \Rightarrow \mathcal{F}_D \downarrow$|
|$\kappa_S$|$\kappa_S \uparrow \Rightarrow \mathcal{F}_S \downarrow$|

## 2. Task Capacity Bound

1. Define regret $R(T) = \sum_{t=1}^T L_t(\theta_t) - L_t(\theta^*)$.
2. Apply Azuma-Hoeffding:
$$\mathbb{P}(R(T) \geq \sqrt{2T \log T}) \leq 1/T$$
3. Per-task average: $\frac{R(T)}{T} = \mathcal{O}(\sqrt{\log T/T})$.
4. Expected overlap: $\mathbb{E}[X] = \epsilon^2 C$. Concentration:
$$\mathbb{P}(X \geq \epsilon C) \leq e^{-\epsilon^2 C/2}$$
5. Conflict bound: $\lambda \epsilon C$.
6. Channel combinations: $\binom{C}{\epsilon C} \geq T$.
7. Stirling approximation:
$$\log T \leq C H(\epsilon)$$
where $H(\epsilon)$ is the binary entropy.
8. Final bound:
$$T_{\max} \leq \frac{C}{2\epsilon} \log\left(1 + \frac{C}{\epsilon \Delta^2}\right)$$

## 3. Experiments

We divided Tiny-ImageNet into 100 tasks and compared different approaches.

|Method|SPG|SPU|WSN|EWC|LwF|SupSup|Ours|
|-|-|-|-|-|-|-|-|
|100 splits|43.40|52.90|60.60|50.61|56.82|49.47|**64.63**|

**Response to W2:** Thank you for the reviewer's suggestions.
- At first, we considered that CL is designed for resource-constrained edge devices (e.g., mobile phones), where models must adapt quickly to new tasks with limited computation and memory.
- To address the reviewer's concern, we conducted corresponding experimental tests on the ImageNet-R dataset and also performed tests using ResNet50.
**(1) ResNet32-based model on ImageNet-R**

|Method|5 splits|10 splits|20 splits|
|-|-|-|-|
|EWC|24.6|27.1|29.3|
|LwF|25.0|28.6|30.3|
|RWalk|26.1|28.4|26.6|
|WSN|28.2|30.4|32.1|
|SPG|23.2|24.1|22.7|
|SPU|26.2|27.6|27.1|
|GPM|26.7|27.1|29.3|
|Ours|**32.8**|**34.3**|**37.3**|

**(2) ResNet50-based model on CIFAR-100**

|Method|5 splits|10 splits|20 splits|
|-|-|-|-|
|EWC|70.9|62.7|57.6|
|LwF|79.2|80.3|82.7|
|RWalk|70.3|66.3|62.1|
|WSN|80.2|81.4|84.3|
|SPG|55.3|57.2|55.1|
|SPU|60.5|62.1|61.7|
|Ours|**84.4**|**87.5**|**89.7**|

**Response to Q1:** Thanks for the comments. We would like to explain them as follows:
- When the number of tasks grows or data complexity rises, the model requires more channels to mitigate inter-task interference.
- ResNet18's deep channels (512) far exceed those of ResNet32 (64), enabling ResNet18 to better meet the demands of sparse allocation in complex task scenarios. We verified that, under the same scenario, the sparser the deeper layers of the network, the greater the performance improvement.

---

Rebuttal Comment 1.1:

Comment: Thanks for the authors' rebuttal. My concerns have been addressed. I will keep my score and support acceptance.

---

Reply to Comment 1.1.1:

Comment: We sincerely thank the reviewer for the positive feedback on our work. We would like to provide further supporting evidence here and kindly hope that the reviewer may consider increasing the score.

**1.
Comparison between LwI and naive averaging**

##### LwI Bound:
$$\mathcal{F}_{\text{LwI}} \leq \eta \sqrt{2(1 - \kappa_S)} + \lambda \epsilon C + \mathcal{O}\left(\sqrt{\frac{2 \log T}{T}}\right)$$

##### Naive Avg Bound:
$$\mathcal{F}_{\text{avg}} \geq \frac{1}{2}\sqrt{\sum \delta_c^2} + \lambda C + \mathcal{O}(1)$$
- $\mathcal{F}_{\text{avg}}$: Forgetting measure (naive averaging)
- $\delta_c$: Channel misalignment distance for filter $c$

When channel alignment quality $\kappa_S > 0.5$ and overlap ratio $\epsilon < 1/C$, the forgetting ratio satisfies:
$$\frac{\mathcal{F}_{\text{LwI}}}{\mathcal{F}_{\text{avg}}} \approx \sqrt{2(1-\kappa_S)} \ \text{(alignment gain)} \cdot \epsilon \ \text{(sparsity gain)} + O(T^{-1/2})$$

1. Our method provides an **upper bound** on forgetting.
2. Naive averaging only provides a **lower bound**.
3. Experiments:

|Method|5 splits|10 splits|20 splits|
|-|-|-|-|
|Naive Avg|72.13 ± 0.98|74.80 ± 1.65|75.30 ± 1.27|
|Ours|**81.10 ± 0.80**|**84.90 ± 0.36**|**86.49 ± 0.55**|

**2. Improvements from the soft matching algorithm.**

We obtain the matrix $K$, which represents the ground cost of moving the neurons in the $l$-th layer of model 1 to the $l$-th layer of model 2. The Sinkhorn algorithm is then used to solve for the corresponding transport matrix:
- Entropy regularization is applied to achieve the soft matching process: $S = \exp(-K/\epsilon)$, where $\epsilon$ is the entropy regularization parameter.
- The iterative process:
  - Step 1 (row constraint): each row of $P$ is rescaled to match $\mu$, the probability distribution of the neurons in model 1, via $P_{ij} \leftarrow \frac{P_{ij}}{\sum_j P_{ij}}$ (the sum of each row is 1).
  - Step 2 (column constraint): each column of $P$ is rescaled to match $\nu$, the probability distribution of the neurons in model 2, via $P_{ij} \leftarrow \frac{P_{ij}}{\sum_i P_{ij}}$ (the sum of each column is 1).

By iteratively applying the above steps (step 1 for row constraints, step 2 for column constraints), we ultimately obtain the corresponding soft permutation matrix $P$.

- We compared fusion using hard matching (Ours with OT), and the results show that soft matching is more advantageous. ResNet18-based model on the CIFAR-100 dataset, measured under task-aware scenarios.

|Method|5 splits|10 splits|20 splits|
|-|-|-|-|
|Ours with OT|75.73±0.58|77.73±0.76|80.60±0.71|
|**Ours**|**81.10±0.80**|**84.90±0.36**|**86.49±0.55**|

**3. Optimization of algorithmic complexity**

- We conducted a corresponding analysis, including the optimization of our proposed method in terms of time complexity. In the context of our hierarchical matching approach, we analyze the time complexity as follows. Given a deep network with $N_L$ layers, each containing $C$ channels, traditional graph matching incurs a time complexity of $O(N^4)$, where $N$ denotes the total number of nodes in the graph (all neurons in the neural network). However, by employing a hierarchical matching strategy for deep networks, we can compute the time complexity separately for each layer and then aggregate the results. Consequently, the overall time complexity of our approach is:
$$
O\Big(\sum_{l=1}^{N_L} C^4\Big) = O\Big(\sum_{l=1}^{N_L} \big(\tfrac{N}{N_L}\big)^4\Big) = O\Big(\tfrac{1}{N_L^3} N^4\Big).
$$
- Although graph matching is generally an NP-hard problem, bilateral relaxation transforms it into a tractable one by converting the general graph matching problem into a bipartite graph matching problem.
The original graph matching problem may involve graphs with arbitrary topological structures, but bipartite graph matching is relatively easier to solve computationally. In this paper, we use the Sinkhorn algorithm, which is a polynomial-time algorithm.
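The alternating row/column normalization described in this rebuttal ($S = \exp(-K/\epsilon)$ followed by repeated row and column rescaling) can be sketched in a few lines of pure Python. This is a toy illustration with uniform marginals, not the authors' implementation:

```python
# Minimal Sinkhorn sketch (toy, uniform marginals): build S = exp(-K/eps),
# then alternately rescale rows and columns until P is near doubly stochastic.
from math import exp

def sinkhorn(K, eps=0.1, iters=200):
    n = len(K)
    # Entropy-regularized similarity matrix.
    P = [[exp(-K[i][j] / eps) for j in range(n)] for i in range(n)]
    for _ in range(iters):
        # Step 1: row constraint -- each row of P sums to 1.
        for i in range(n):
            s = sum(P[i])
            P[i] = [p / s for p in P[i]]
        # Step 2: column constraint -- each column of P sums to 1.
        for j in range(n):
            s = sum(P[i][j] for i in range(n))
            for i in range(n):
                P[i][j] /= s
    return P  # soft permutation (transport) matrix
```

For a cost matrix whose cheapest assignment is the identity, the iteration converges to a matrix close to the identity permutation; smaller `eps` makes the soft matching sharper.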
Leveraging Randomness in Model and Data Partitioning for Privacy Amplification
Accept (poster)
Summary: The submission analyzes privacy amplification for Renyi DP in two settings where $N$ (sets of) records are independently assigned to $k$ out of $d$ components of DP-SGD: (1) data partitioning, where each record from a dataset of size $N$ contributes to $k$ out of $d$ gradient steps ("balanced iteration subsampling"), and (2) model partitioning, where all records associated with one of $N$ clients are used to update $k=1$ out of $d$ subsets of model parameters.

The authors first present a general upper bound on the Renyi divergence of the Gaussian mixture distributions that arise from such random partitioning schemes. A proof for this main theorem is provided at the end of the methods section. The main objective of this proof is to determine sound upper bounds that eliminate computationally intractable multinomial terms (forward divergence) or eliminate mixture densities from the denominator in the Renyi divergence integral (reverse divergence).

For model partitioning, the general bound is first instantiated for disjoint partitionings of the model parameters. It is then generalized for non-disjoint partitionings by composing privacy guarantees for the intersection and symmetric difference of the parameter subsets. Next, the disjoint partitioning bound is generalized to probabilistic partitionings via joint quasi-convexity of Renyi divergences. Finally, this probabilistic disjoint partitioning bound is applied to the special case of parameter dropout, which partitions models into activated and deactivated neurons.

For data partitioning, the general bound is again instantiated to obtain epoch-level privacy guarantees, which are then compared to a composition of iteration-level privacy guarantees for Poisson subsampling.

In the experimental section, the authors evaluate the utility of ResNet-101 finetuned on CIFAR-10 with (1) model partitioning and bounds from prior work, (2) model partitioning with the proposed bounds, and (3) without model partitioning.
The results show an improved privacy--utility trade-off for (2) compared to (1), achieving an accuracy closer to (3).

Claims And Evidence: The submission makes two central claims:
1. That the randomness inherent to different commonly used model partitioning strategies can be used to provide stronger privacy guarantees.
2. That balanced iteration subsampling can offer stronger privacy amplification than Poisson subsampling for certain parameterizations.

Claim 1 is proven by formally deriving corresponding amplification guarantees. Claim 2 is proven by deriving corresponding amplification guarantees and comparing them to tight amplification bounds for Poisson subsampling for different epoch lengths, subsampling rates, and fixed RDP parameter $\alpha$ (see Fig. 2).

**Rating: Good**

Methods And Evaluation Criteria:

### Methods

In the case of model partitioning, the work does not provide new methods as such, but rather derives better privacy guarantees for these methods. The proposed balanced iteration subsampling appears like a reasonable alternative to random shuffling or Poisson sampling. It fulfills its role in demonstrating that amplification-by-subsampling can also be attained by sampling non-i.i.d. lots/batches. However, it is not clear whether this subsampling method is preferable for model training (see "Experimental Designs or Analyses" below).

### Evaluation Criteria

Privacy guarantees are compared by plotting (Renyi) privacy profiles, which is common practice. The privacy--utility trade-off attained via the amplification bounds is evaluated by comparing accuracy at a fixed privacy budget, which also makes sense. CIFAR and ResNet are standard choices for such experiments in DP literature.

**Rating: Good**

Theoretical Claims: I went through the proofs in Section 3.4 and read the proofs of Theorems 3.4, 3.5, 3.7, and 3.8 in detail. I skimmed the other proofs in the appendix.
**Overall, the proof strategy appears sound, but I did not check each individual equation for correctness.**

I have a minor concern w.r.t. Corollary 3.7 (Parameter dropout). The proof seems to largely follow that of Theorem 3.5 (Disjoint partitioning). However, Theorem 3.5 assumes that the partitioning is done centrally (see l. 172 in Algorithm 1), whereas dropout is applied independently per client (see l. 228 in Algorithm 2). I believe that the guarantee still holds due to parallel composition (i.e., we only need to apply Theorem 3.5 to the specific client that holds the inserted/removed record), but it would be good if the authors could clarify this.

**Rating: Ok, but requires some clarification**

Experimental Designs Or Analyses: As stated above, the evaluation in Section 4 is sound. However, the privacy--utility trade-off is only investigated for a specific model partitioning scheme, one specific choice of privacy budget, and one random seed. The argument for leveraging the randomness in model partitioning in practice could be strengthened by also:
* Considering a wider range of privacy budgets to show some form of Pareto-domination of the baseline.
* Considering fully disjoint model partitioning and random dropout partitioning.
* Repeating the experiment with multiple random seeds and reporting standard deviations.

Similarly, the evaluation of the RDP profiles in Fig. 3 and Fig. 4 could be repeated for different choices of $T, k, \sigma$ in the appendix to show that the derived bounds are beneficial beyond this very specific choice of parameters.

Finally, the privacy--utility trade-off attained by balanced iteration subsampling is not investigated. Thus, it is not clear whether there is any benefit to actually using this new data partitioning scheme for model training in practice.
**Rating: Ok for a theory-focused work, but experimental evaluation could be expanded / be more thorough to show impact on practical applications**

Supplementary Material: See "Theoretical Claims" above.

Relation To Broader Scientific Literature: On a high level, this work studies privacy amplification, i.e., leveraging elements of internal randomness that induce mixture distributions. The proposed balanced iteration subsampling represents an alternative to commonly used subsampling schemes that sample each batch for a training epoch independently. Similar non-i.i.d. subsampling schemes have already been studied in prior work (shuffling, random check-ins, and balls-and-bins sampling). However, it generalizes them by allowing each record to contribute to multiple training steps, i.e., this aspect is somewhat novel. It seems like analyzing amplification by model partitioning is completely novel, but I am not sufficiently familiar with this side of the literature to make a definite statement about its novelty.

The privacy analysis is conducted in the framework of Renyi differential privacy / moments accounting. This enables very simple privacy accounting for subsampled mechanisms, but overestimates privacy leakage when converted to approximate DP. However, this framework has been superseded by other numerical and analytical approaches (e.g., Fast Fourier Accounting (Koskela et al., 2019) and Analytical Fourier Accounting (Zhu et al., 2022)).

Essential References Not Discussed: Balanced iteration subsampling with $k=1$ has already been proposed in "Balls-and-Bins Sampling for DP-SGD" by Chua et al. (December 2024). However, this prior work appeared just one month before the submission deadline, so I do not think that the authors necessarily need to discuss it.

Other Strengths And Weaknesses:

### Other Strengths

* The paper is overall well-written and structured.
Especially the main section, which first states the main result, then applies it to different use cases, and finally discusses the technical details of its derivation.
* The work clearly states a research question (paragraph 2 of Section 1), which is good.
* The authors are transparent about limitations / avenues towards deriving tighter bounds (e.g., by leveraging the randomness of probabilistic model partitioning in Theorem 3.5).

### Other Weaknesses

* The work uses RDP, which overestimates privacy leakage when converted to approximate differential privacy.
* The caption of Fig. 2 states "With different values of $\alpha$, the graph shows a similar pattern." This should be substantiated by actually plotting the graph for different values of $\alpha$ in the appendix.
* The authors do not specify for which neighboring relation they derive their bounds. I assume it is insertion/removal of a single record (of a single client), but it would be good to clarify this.
* In Fig. 1 ("No amplification") and Section 4 ("baseline"), it is not clear how exactly the baseline privacy guarantees are computed. It would be better to clarify this in the appendix.

Other Comments Or Suggestions:

### Comments

Theorem 3.4 states that the amplification guarantee holds for any disjoint model splitting method. However, the proof only considers deterministic splitting. Probabilistic splitting is only analyzed later in Theorem 3.5. You may want to add a quantifier: "Each iteration of Algorithm 1 with a >deterministic< disjoint model splitting method [...]".

### Conclusion

Overall, the submission makes substantial, well-presented contributions in two different directions: privacy accounting for non-i.i.d. subsampling and leveraging internal randomness due to model partitioning. Since model partitioning is already used for computational reasons, the latter essentially provides additional privacy for free. As such, the work could potentially have high impact on federated learning literature.
The main Theorem appears very general and may thus also enable the analysis of various other non-i.i.d. subsampling schemes. Thus, there may also be some impact on differentially private ML in the centralized setting. My main concern is that the experimental evaluation is not sufficiently thorough to make any definite statements about how large the benefit of using these new sources of randomness actually is in practical applications. However, since the main contribution lies in the conducted theoretical analysis, I nevertheless recommend acceptance. ## Update after Rebuttal The authors have addressed most of my comments. I continue to recommend acceptance, see details in rebuttal response below. Questions For Authors: * At the bottom of Fig. 2, there is a very thin green line. Does this mean that balanced iteration subsampling is better for $\gamma \to 0$, or is this a bug in the plotting code? * Could you please explain why Corollary 3.7 follows from Theorem 3.5, even though the model splitting is now done per client? (See "Theoretical Claims" above). Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed review of our paper and their valuable feedback. Our paper is indeed the first to point out and quantify the privacy gain from model parallelism techniques already employed in federated learning. Following the reviewer's suggestions, we have done more experiments for both centralized and federated settings. In the centralized setting with sample-level $(8, 10^{-5})$-DP, training ResNet101 with 3 submodels under the standard DP-SGD analysis achieves 79.80\% accuracy and using our analysis accuracy increases to 82.43\% (this is because our analysis accounts for the amplification gain for model splitting, allowing us to add less noise while still achieving the same DP guarantee); 8 submodels under the standard DP-SGD analysis has 76.80\% accuracy and using our analysis accuracy increases to 80.52\%. In the federated setting with user-level $(8, 10^{-5})$-DP, 3 submodels with the standard analysis has 78.47\% accuracy and using our analysis accuracy increases to 80.28\%; 8 submodels with the standard analysis has 76.96\% accuracy, and using our analysis accuracy increases to 79.11\%. The experiments are run with 3 random seeds and all have standard deviations of around 0.7\%. The design choice to not partition the first and last layers of ResNet-101 is guided by insights from the model splitting literature (e.g. Dun et al.). They argue that since these layers pick up important features, partitioning them leads to degraded accuracy. Following the guidance from this literature, we experimented with partial and not fully disjoint model splitting. As for the comparison between Balanced Iteration Subsampling and Poisson Subsampling, we would like to emphasize a point which we will make more clear in the revision. Balanced Iteration and Poisson Subsampling achieve similar privacy-accuracy trade-offs experimentally. 
This is because standard training includes a large number of iterations, and for large numbers of iterations, both the training dynamic and privacy guarantees of the two become comparable as pointed out in the paper. The main advantage of Balanced Iteration Subsampling is that it is more practical and deployment-friendly especially in the federated setting. Poisson Subsampling has implementation-related drawbacks as it gives a variable load to each client, which can overwhelm the resources of the client and undermine fair use policies. The DP community has been interested in studying other (more deployment-friendly) data subsampling techniques whose utility and privacy guarantees are comparable to Poisson Subsampling, for example random shuffling and random check-ins (Balle et al 2020). Our paper fills this gap by showing that this is indeed the case for Balanced Iteration Subsampling. As for the actual experiments, for CIFAR-10 with WideResNet-40-4, $(8, 10^{-5})$-DP, 2000 iterations, using each sample 655 times for Balanced Iteration Subsampling and with probability $\frac{655}{2000}$ in each iteration for Poisson Subsampling, Balanced Iteration Subsampling injects noise with $\sigma = 10.17$ and achieves validation accuracy of $70.21\%$ (with standard deviation $0.69\%$), while Poisson Subsampling injects noise with $\sigma = 10.20$ and achieves validation accuracy of $70.13\%$ (with standard deviation $0.78\%$). ResNet-101 gives similar results. We will include these experiments and show more graphs to compare the privacy guarantees of the two subsampling methods in the revision. In Figure 2, the thin line at $\gamma \approx 0$ is due to discretization of $\gamma$ values and the code arbitrarily ruling in favor of Balanced Iteration Subsampling when $\gamma = 0$. On the concern about Corollary 3.7, we would like to clarify that the privacy gain we utilize comes from the random (and independent) assignment of submodels to the clients. 
So, although in Algorithm 1, model splitting is done centrally, each client independently receives a submodel. Algorithm 2's parameter dropout can be seen as forming $2^m$ submodels (where $m$ is the number of parameters subject to dropout), and each client independently receives a submodel, so from that perspective, dropout is fundamentally no different, and thus it is a corollary that follows. The dataset neighboring relationship is indeed add-/remove-one, which was briefly mentioned in Section 2. --- Rebuttal Comment 1.1: Comment: Thank you. This, combined with the explanation of the "baseline" in the other rebuttals addresses most of my concerns, particularly w.r.t. experimental evaluation of the bounds and the resultant privacy--utility trade-off. --- I just have one more question w.r.t. dropout: Theorem 3.4 assumes that we have a disjoint partitioning into submodels. Theorem 3.5 assumes that we sample a disjoint partitioning. Given such a partitioning, each client is assumed to independently sample one of the disjoint submodels. However, many of the $2^m$ submodels will be non-disjoint. Could you maybe clarify why Theorem 3.4/3.5 still applies? Can this per-client-dropout somehow be written as a probabilistic mixture of disjoint partitionings? --- Overall, I still think that this is a good submission that extends privacy amplification into an interesting, novel direction (using other sources of randomness than shuffling / i.i.d. sampling of batches for amplification) and continue to recommend acceptance. --- Reply to Comment 1.1.1: Comment: Thank you for following up and clarifying the original concern. The answer is in the proof of Corollary 3.7 in the appendix, but we can give an overview here. Dropout with rate $0.5$ can indeed be seen as a probabilistic mixture of disjoint partitionings. 
For any subnet $W_J$ formed by dropout, the complementary subnet $W_J^c$ (i.e., the subnet formed by the parameters not in $W_J$) has the same probability to be chosen as $W_J$ because we require dropout with probability $0.5$ (both $W_J$ and $W_J^c$ have probability $2^{-m}$ where $m$ is the number of parameters subject to dropout). This forms two disjoint submodels, so in Corollary 3.7, $d=2$. Dropout is then a probabilistic mixture of these over all potential $W_J$'s. Remark 3.6 also applies here, i.e., there may be a tighter way to leverage the randomness in dropout, but it is still a significant improvement over ignoring randomness entirely and defaulting to the standard DP-SGD analysis.
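The mixture argument above can be sanity-checked by brute force on a toy model (an illustrative sketch; the parameter count `m` is arbitrary and not taken from the paper):

```python
from itertools import product

# Under dropout with rate 0.5, every subnet W_J and its complement W_J^c
# are equiprobable, so dropout is a uniform probabilistic mixture over
# 2^(m-1) disjoint two-block partitionings (d = 2 in Corollary 3.7).
m = 4
masks = list(product([0, 1], repeat=m))     # 1 = parameter kept in W_J
prob = {mask: 0.5 ** m for mask in masks}   # rate-0.5 dropout is uniform

partitions = set()
for mask in masks:
    comp = tuple(1 - b for b in mask)
    # W_J and W_J^c are disjoint and jointly cover all m parameters...
    assert all(a + b == 1 for a, b in zip(mask, comp))
    # ...and are chosen with the same probability 2^(-m).
    assert prob[mask] == prob[comp]
    partitions.add(frozenset([mask, comp]))

assert len(partitions) == 2 ** (m - 1)
```

Each two-element set in `partitions` is one disjoint partitioning, and dropout selects uniformly among them, matching the rebuttal's overview.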
Summary: The paper shows how Renyi DP guarantees can be amplified under two different kinds of data sub-sampling and partitioning strategies. The first is where all data points are used for the same number of iterations but in randomly distributed steps. The second is where different parts of the model are updated with randomly chosen data samples. In each case, the paper shows when the amplification is larger than what was previously known by existing bounds. Claims And Evidence: Yes, I found most claims to be well-managed in terms of what is claimed and what is shown. The following cases were a bit problematic for me though it was not overclaiming per se - 1. That the amplification with model partitioning can be helpful for federated and distributed learning (see Line 074, left column); however, there were no experimental setups proposed to show this. Given that the benefit largely depends on the exact problem parameters, it is hard to justify if this indeed results in a benefit and is not just a theoretical construct. 2. Similarly for Balanced iteration subsampling, the introduction promised that balanced iteration subsampling can overcome the issue of disparate sampling in Poisson subsampling and can be used in both centralised and federated learning, but there wasn't an experimental validation of this (and the regimes where an advantage is shown to exist in Fig 2 and 3 were very limited) Methods And Evaluation Criteria: Evaluation is severely lacking. See above for claims that would be good to verify. Theoretical Claims: I have not checked the proofs but the theorem statements are very well written and the proof sketches are well motivated. So I do not doubt that they are correct (or can be correct in case there are typos or mistakes) Experimental Designs Or Analyses: This is severely lacking as I have mentioned above. Supplementary Material: I have not checked. Relation To Broader Scientific Literature: Well done. Essential References Not Discussed: Well done. 
Other Strengths And Weaknesses: 1. The paper is very well written and with one reading it was very clear to me - what the gap in the literature is, how the authors are solving it, what the theoretical statements are and roughly how they are proven. 2. The technical novelty is not particularly large but it is still interesting in terms of the results that are proven. The problem setting is also interesting. 3. There are three major weaknesses. i. The theoretical regimes of improvement, at least what is evident from the paper, are very limited. For example, if I have interpreted Line 270 to 274 correctly (correct me if I am wrong) and if I make a parallel to classical non-private training, $\gamma$ should be thought of as $1/\mathrm{batch size}$ which is the RV indicating what fraction of iterations have that particular example. Now this is of the order of $1/256\ll 0.005$. In this regime Poisson subsampling seems to be better. (I understand that the regime where the binomial heavily concentrates around its mean is where Poisson subsampling should be better and this is precisely the case above but also this is the common regime) ii) The paper seems to be incomplete in the sense that several things could be improved (also highlighted by the authors). For example, Theorem 3.5 only uses one of the two sources of randomness, Line 196 also seems to be a very loose way of going about it, and Line 305. iii) Pending the above and the limited theoretical contribution, I am not viewing its theoretical contribution to be large enough to exempt the need for sufficient experimental evidence to validate its contribution. In this space, the paper's results are very limited. Other Comments Or Suggestions: Update: I have updated my score to weak accept. Questions For Authors: Please address the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their review and their valuable feedback. We will incorporate their comments in our revised paper. Following the reviewer's suggestions, we have done additional experiments for both centralized and federated settings. In the centralized setting, training ResNet101 using 3 submodels achieves 79.80\% accuracy with sample-level $(8, 10^{-5})$-DP guarantee for the final model under the standard DP-SGD analysis, and accuracy increases to 82.43\% for the same privacy guarantee using our analysis (this is because our analysis accounts for the amplification gain for model splitting, allowing us to add less noise while still achieving the same DP guarantee); training ResNet101 with 8 submodels has 76.80\% accuracy under the standard DP-SGD analysis and using our analysis accuracy increases to 80.52\%. In the federated setting, training ResNet101 using 3 submodels achieves 78.47\% accuracy and user-level $(8, 10^{-5})$-DP under the standard analysis, using our analysis accuracy increases to 80.28\% under the same privacy guarantee; 8 submodels with the standard analysis has 76.96\% accuracy, and using our analysis increases the accuracy to 79.11\%. We would like to clarify the main concern regarding the comparison between Balanced Iteration Subsampling and Poisson Subsampling. Even though we show that Balanced Iteration Subsampling can have better privacy guarantees in some regimes, as the reviewer points out Balanced Iteration and Poisson Subsampling achieve similar privacy-accuracy trade-offs experimentally. This is because standard training includes a large number of iterations, so both the training dynamic and privacy guarantees of the two become comparable (see experimental results below). The main advantage of Balanced Iteration is that it is more practical and deployment-friendly especially in the federated setting. 
Poisson Subsampling has implementation-related drawbacks as it gives a variable load to each client, which can overwhelm the resources of the client and undermine fair use policies. The DP community has been interested in studying other (more deployment-friendly) data subsampling techniques whose utility and privacy guarantees are comparable to Poisson Subsampling, for example random shuffling and random check-ins (Balle et al 2020). Our paper fills this gap by showing that this is indeed the case for Balanced Iteration Subsampling. As for the actual experiments, for CIFAR-10 with WideResNet-40-4, $(8, 10^{-5})$-DP, 2000 iterations, using each sample 655 times for Balanced Iteration Subsampling and with probability $\frac{655}{2000}$ in each iteration for Poisson Subsampling, Balanced Iteration Subsampling injects noise with $\sigma = 10.17$ and achieves validation accuracy of $70.21\%$ (with standard deviation $0.69\%$), while Poisson Subsampling injects noise with $\sigma = 10.20$ and achieves validation accuracy of $70.13\%$ (with standard deviation $0.78\%$). ResNet-101 gives similar results. We will include these experiments in the revision. On the second concern, Theorem 3.5 and Corollary 3.7 indeed utilize only one of the two sources of randomness for amplification because different training dynamics require different privacy analyses. While our analysis does not fully capture all sources of randomness in the setting of Thm 3.5 (this would require an extension of our current analysis), we hope the reviewer will appreciate that we demonstrate a significant improvement over ignoring randomness entirely and defaulting to the standard DP-SGD analysis. We hope our paper will inspire the DP community to further explore such inherent gains and develop stronger mathematical tools for their accounting. This remains a promising direction for future work. 
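The contrast between the two schemes can be illustrated with a toy scheduling sketch (under the natural reading that Balanced Iteration Subsampling uses each sample in exactly $k$ uniformly chosen iterations; all constants are illustrative, not the paper's):

```python
import random

def poisson_schedule(n, T, p, rng):
    # Each sample joins each of T iterations independently w.p. p, so the
    # per-sample load is Binomial(T, p), i.e., variable across samples.
    return [[i for i in range(n) if rng.random() < p] for _ in range(T)]

def balanced_schedule(n, T, k, rng):
    # Each sample is assigned to exactly k uniformly chosen iterations,
    # so every sample carries the same fixed load.
    batches = [[] for _ in range(T)]
    for i in range(n):
        for t in rng.sample(range(T), k):
            batches[t].append(i)
    return batches

rng = random.Random(0)
T, n, k = 200, 50, 65

bal_counts = [0] * n
for batch in balanced_schedule(n, T, k, rng):
    for i in batch:
        bal_counts[i] += 1

poi_counts = [0] * n
for batch in poisson_schedule(n, T, k / T, rng):
    for i in batch:
        poi_counts[i] += 1

# Balanced: every sample is used exactly k times; Poisson: loads
# fluctuate around T*p = k from sample to sample.
assert all(c == k for c in bal_counts)
```

The fixed per-sample load is exactly what makes the balanced scheme deployment-friendly: no client can be assigned an unexpectedly heavy workload.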
Regarding Line 196, which proposes using two clipping norms—one for the parts of the model with model splitting and another for the parts without—it is, in our view, the only viable approach. If a single clipping norm were used, the worst-case $\epsilon$ would occur when the gradient vector concentrates all its power on the part of the model without model splitting. In that scenario, there would be no privacy amplification at all. Therefore, two separate clipping norms appear necessary to the authors. On the third point, our paper is not merely about introducing a theoretical tool to quantify privacy in specific cases; rather, we view the broader contribution of our paper as highlighting an important but previously overlooked insight —that inherent randomness in the training dynamics, such as model parallelism techniques already used in centralized and federated learning, can be leveraged for privacy amplification, at no additional cost. To our knowledge, this is the first work to identify and quantify such an amplification gain. Beyond the specific analysis presented, we aim to bring this perspective to the DP community and encourage further exploration of how different training dynamics yield privacy gains.
Summary: The paper explores how inherent randomness in machine learning training can be used for privacy amplification, specifically model partitioning and data subsampling. These methods can potentially enhance the training privacy without adding excessive noise. Claims And Evidence: It is somewhat unclear whether the study proposes an amplified privacy scheme or a tighter privacy analysis. The current claim leans towards the former, but it lacks a thorough privacy-utility tradeoff analysis and convergence analysis. This leaves open the question of whether the proposed stronger privacy comes at the cost of worse utility or convergence rate compared to the canonical DP-SGD method. A finer concern is the intuition behind the proposed balanced data subsampling. As it fixes the number of iterations each sample appears in, the proposed balanced sampling introduces less randomness than Poisson sampling. Does this balanced sampling improve only the worst-case privacy, or does it also improve the average privacy across all samples? Methods And Evaluation Criteria: The proposed methods including model and data partitioning are relevant and make sense for privacy amplification in ML model training. Theoretical Claims: I checked the correctness of the proof outline in section 3.4. Experimental Designs Or Analyses: The current experiments do not address the comparison of the privacy-utility tradeoff between the proposed techniques and the existing DP-SGD training pipeline. Further experiments are needed to address how the privacy amplification methods affect the training performance, especially in convergence and model utility. In addition, for the balanced sampling scheme, the experiment setup should elaborate on whether it evaluates the average or worst-case privacy of the proposed and baseline sampling methods, and how. Supplementary Material: I reviewed the supplementary A.10 and A.12. 
Relation To Broader Scientific Literature: The key contributions are closely related to prior works on DP, particularly in the context of model parallelism and randomized training processes. The authors build upon existing techniques such as model partitioning and Poisson subsampling to propose a novel approach that leverages randomness inherent in the training process for privacy amplification. Essential References Not Discussed: The related works are properly cited. Other Strengths And Weaknesses: The reviewer suggests clearly stating the baseline privacy analysis for a more direct comparison. It is currently only represented by a single line in the experimental figures, with no rigorous equations or specific experimental setups. It would be helpful to clearly state the baseline privacy guarantee, and mathematically compare it with the proposed Theorems 3.4, 3.5, and 3.8 to directly demonstrate the advantages of the proposed schemes. The manuscript would benefit from better organization, particularly in making the theoretical analysis more intuitive. The proof of Theorem 3.1 could be summarized as a helper lemma in the main text, with full details in the appendix. Each model splitting and data sampling method should have a clear explanation of how it corresponds to Helper Lemma 3.1. For example, the connection between model splitting and the mixed Gaussian distribution in Theorem 3.1 should be explicitly stated. While Remark 3.3 touches on this, it lacks rigor and comprehensiveness. Other Comments Or Suggestions: The reviewer suggests adding more experiments with different model structures, training datasets and training hyperparameters. Questions For Authors: - Please clarify whether the paper proposes a tighter privacy analysis or a better privacy scheme, and add a comparison to baselines in terms of model utility and training convergence (see Claims And Evidence and Experimental Designs Or Analyses). 
- Please detail the baseline privacy analysis including theoretical formulations and experimental setups (see Other Strengths And Weaknesses). - Please clarify whether the proposed balanced sampling scheme improves worst-case privacy or average privacy (see Claims And Evidence). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their review and valuable feedback. We would like to clarify their main question about "whether the proposed stronger privacy comes at the cost of worse utility or convergence rate compared to the canonical DP-SGD method." The main contribution of our paper is to develop a mathematical analysis that is able to quantify the privacy gain that comes from various sources of randomness that share a common structure. Model splitting is already used in the literature both in the centralized and federated settings. We are not proposing a new technique here; we are pointing out that model splitting has a free privacy gain which has remained unnoticed in the prior literature (which requires nontrivial analysis to quantify). Since we are not changing the training, the same utility and convergence rates can now be achieved with better privacy guarantees. Conversely, for the same privacy, we can achieve better utility and convergence rates than canonical DP-SGD since we need to inject less noise (see experimental results below). Balanced Iteration Subsampling is indeed a new subsampling method we propose. Balanced Iteration Subsampling is more deployment-friendly than Poisson subsampling, which has implementation-related drawbacks as it gives a variable load to each client. This can overwhelm the resources of the client and undermine fair use policies. The DP community has been interested in studying other (more deployment-friendly) data subsampling techniques whose utility and privacy guarantees are comparable to Poisson Subsampling, for example random shuffling and random check-ins (Balle et al.). 
Our paper fills this gap by showing that Balanced Iteration Subsampling has comparable performance to Poisson subsampling (theoretically we can prove slightly better privacy guarantees for Balanced Iteration Subsampling; however, in experiments we observe that Balanced Iteration Subsampling and Poisson subsampling have similar performance, see below for details). We will revise our paper to make these points more clear. Regarding the question about whether we use "worst-case privacy, or average privacy across all samples", we would like to point out that we always use the standard definitions of $(\epsilon, \delta)$-DP and $(\alpha, \epsilon)$-RDP in the literature. The definition of $(\epsilon, \delta)$-DP can be intuitively thought of as the privacy leak in the worst $\delta$-fraction of samples, so in that sense it makes intuitive sense that Poisson subsampling can have worse $(\epsilon, \delta)$-DP than Balanced Iteration Subsampling. The baseline for our model splitting results is the canonical DP-SGD analysis as pointed out by the reviewer. For one iteration, the canonical DP-SGD analysis would give $\epsilon = \frac{\alpha c^2}{2\sigma^2}$ RDP in contrast to the RDP bound in Theorem 3.1 (this expression for $\epsilon$ does not have a dependence on the number of submodels because the canonical analysis does not leverage the privacy amplification gain due to model splitting, which is the contribution of our paper). Figure 1 presents a visual comparison of our analysis and the standard analysis. We will make this point clear in the revision. To put this into perspective, for $(\epsilon, \delta)$-DP with $\delta = 10^{-5}$, 1200 training iterations, data subsampling rate of 0.1, a noise standard deviation of 2, our analysis can guarantee $\epsilon = [3.5, 4.0, 5.0, 7.4]$ for $[8, 6, 4, 2]$ submodels, while without using our analysis, the best guarantee is $\epsilon = 11.0$ for all numbers of submodels. 
To give another example, with $\sigma = 6$, our analysis guarantees $\epsilon = [1.0, 1.2, 1.5, 2.1]$ while standard analysis gives $\epsilon = 3.0$. We will add these as graphs to the revised version. Note that because training remains the same and we only change the privacy analysis, utility remains the same in both cases. Conversely, we can fix the same $(\epsilon, \delta)$-DP budget and compare utility. We have done such experiments for both centralized and federated settings. In the centralized setting with sample-level $(8, 10^{-5})$-DP, training ResNet101 with 3 submodels under the standard DP-SGD analysis achieves 79.80\% accuracy and using our analysis accuracy increases to 82.43\% (this is because our analysis accounts for the amplification gain for model splitting, allowing us to add less noise while still achieving the same DP guarantee); 8 submodels under the standard DP-SGD analysis has 76.80\% accuracy and using our analysis accuracy increases to 80.52\%. In the federated setting with user-level $(8, 10^{-5})$-DP, 3 submodels with the standard analysis has 78.47\% accuracy and using our analysis accuracy increases to 80.28\%; 8 submodels with the standard analysis has 76.96\% accuracy, and using our analysis increases it to 79.11\%. We will add these experiments to the revised version of the paper, and incorporate the reviewer's suggestion about the restructuring of the results. --- Rebuttal Comment 1.1: Comment: The clarification that Balanced Iteration Subsampling is primarily deployment-friendly rather than offering a stronger privacy guarantee addresses my main concern. This distinction from the model splitting part, which provides a nontrivial privacy gain, should be clearly stated in the revision. With the above clarification and the additional experiments, I would consider a weak accept. --- Reply to Comment 1.1.1: Comment: Thank you for reading our rebuttal and updating your review!
Summary: The paper proposes a unified privacy analysis for the applications of model and data partitions. The crucial theorem, Theorem 3.1, which is novel and non-trivial to the best of my knowledge, states the Renyi divergence between a Gaussian distribution and a mixture of Gaussians. Building on this theorem, the privacy analysis for applications such as model splitting, dropout, and balanced iteration subsampling can be amplified accordingly. The paper empirically compares the proposed privacy analysis and the analysis in the literature in their experiment. Claims And Evidence: The claims and evidence are mostly convincing to me. The only place that could be better supported is a straightforward comparison between the proposed analysis and the analysis in the literature. In the application of balanced iteration subsampling, the paper states the privacy guarantee of the analysis in the literature. However, a similar statement for the analysis in the literature is not shown for applications in Section 3.2. Moreover, the numerical comparison is only conducted with one choice of $\sigma$. Methods And Evaluation Criteria: The algorithms and the theoretical results make sense. Theoretical Claims: I have checked the proof of the theorems for the applications (Theorem 3.4 - 3.8), but have not checked the exact proof of Theorem 3.1 (in appendix). Experimental Designs Or Analyses: 1. The empirical evaluation is conducted only with one dataset and model. As the comparison between the analysis in the literature and the proposed analysis is not consistent for all cases (of hyperparameter configurations), empirical evaluation is necessary to show the effectiveness of the proposed analysis. 2. Only one application, model partitioning, is evaluated in the experiment section. Other applications, such as dropout and balanced iteration subsampling, are not evaluated. Supplementary Material: I checked the proof for Theorem 3.4-3.8 in the appendix. 
Relation To Broader Scientific Literature: N/A Essential References Not Discussed: [1] seems to also study the mixture of Gaussians for Renyi DP, though it specifically computes the one-dimensional case. [1] Mironov, Ilya, Kunal Talwar, and Li Zhang. "R\'enyi differential privacy of the sampled gaussian mechanism." arXiv preprint arXiv:1908.10530 (2019). Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: The motivation statements (lines 158-164) in Section 3.2.2 are not clear. For the blocks that are sensitive to pruning, why is it better to not partition them? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their review and their valuable feedback. We would like to clarify their main concern regarding "a straightforward comparison between the proposed analysis and the analysis in the literature" for the model splitting methods in Section 3.2. We note that our paper is the first one to point out that model splitting has an inherent privacy amplification gain and to quantify this privacy gain. The baseline to compare this is the standard RDP analysis for DP-SGD which does not take this amplification gain into account since it has remained unnoticed and unquantified in the literature. For one iteration, the standard analysis in the literature would give $\epsilon = \frac{\alpha c^2}{2\sigma^2}$ in contrast to the RDP bound in Theorem 3.1 (this expression for $\epsilon$ does not have a dependence on the number of submodels, because again it does not take into account the fact that training uses model splitting). Figure 1 presents a visual comparison of our analysis and the aforementioned standard analysis. We will make this point clear in the revision. To put this into perspective, for $(\epsilon, \delta)$-DP with $\delta = 10^{-5}$, 1200 training iterations, data subsampling rate of 0.1, a noise standard deviation of 2, our analysis can guarantee $\epsilon = [3.5, 4.0, 5.0, 7.4]$ for $[8, 6, 4, 2]$ submodels, while without using our analysis, the best guarantee is $\epsilon = 11.0$ for all numbers of submodels. To give another example, with $\sigma = 6$, our analysis guarantees $\epsilon = [1.0, 1.2, 1.5, 2.1]$ while standard analysis gives $\epsilon = 3.0$. We will add these as graphs to the revised version. Conversely, we can fix the same $(\epsilon, \delta)$-DP for the final model and compare accuracy. We have done such experiments for both centralized and federated settings. 
In the centralized setting with sample-level $(8, 10^{-5})$-DP, training ResNet101 with 3 submodels under the standard DP-SGD analysis achieves 79.80\% accuracy and using our analysis accuracy increases to 82.43\% (this is because our analysis accounts for the amplification gain for model splitting, allowing us to add less noise while still achieving the same DP guarantee); 8 submodels under the standard DP-SGD analysis has 76.80\% accuracy and using our analysis accuracy increases to 80.52\%. In the federated setting with user-level $(8, 10^{-5})$-DP, 3 submodels with the standard analysis has 78.47\% accuracy and using our analysis accuracy increases to 80.28\%; 8 submodels with the standard analysis has 76.96\% accuracy, and using our analysis increases it to 79.11\%. As for the comparison between Balanced Iteration Subsampling and Poisson Subsampling, we would like to emphasize a point which we will make more clear in the revision. Balanced Iteration and Poisson Subsampling achieve similar privacy-accuracy trade-offs experimentally. This is because standard training includes a large number of iterations, and for large numbers of iterations, both the training dynamic and privacy guarantees of the two become comparable as noted in the paper. The main advantage of Balanced Iteration Subsampling is that it is more practical and deployment-friendly especially in the federated setting. Poisson Subsampling has implementation-related drawbacks as it gives a variable load to each client, which can overwhelm the resources of the client and undermine fair use policies. The DP community has been interested in studying other (more deployment-friendly) data subsampling techniques whose utility and privacy guarantees are comparable to Poisson Subsampling, for example random shuffling and random check-ins (Balle et al. 2020). Our paper fills this gap by showing that this is indeed the case for Balanced Iteration Subsampling. 
As for the actual experiments, for CIFAR-10 with WideResNet-40-4, $(8, 10^{-5})$-DP, 2000 iterations, using each sample 655 times for Balanced Iteration Subsampling and with probability $\frac{655}{2000}$ in each iteration for Poisson Subsampling, Balanced Iteration Subsampling injects noise with $\sigma = 10.17$ and achieves validation accuracy of $70.21\%$ (with standard deviation $0.69\%$), while Poisson Subsampling injects noise with $\sigma = 10.20$ and achieves validation accuracy of $70.13\%$ (with standard deviation $0.78\%$). ResNet-101 gives similar results. We will include these experiments in the revision. While we are aware of the paper by Mironov et al., which also considers Poisson subsampling with the Gaussian mechanism, we use the stronger analysis in (Zhu \& Wang) for Poisson subsampling. Lastly, the design choice to not partition the first and last layers of ResNet-101 is guided by insights from the model splitting literature (e.g., Dun et al.). They argue that since these layers pick up important features, partitioning them leads to degraded accuracy. Following the guidance from this literature, we experimented with partial and not fully disjoint model splitting.
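To make the deployment argument concrete, here is an illustrative sketch (our own code, not the paper's exact procedure) contrasting the variable per-client load of Poisson subsampling with the fixed load of a balanced iteration schedule:

```python
import random

def poisson_subsample(n, q, rng):
    # Poisson subsampling: each of the n samples joins the batch independently
    # with probability q, so batch size (and per-client load) is variable.
    return [i for i in range(n) if rng.random() < q]

def balanced_iteration_schedule(n, uses_per_sample, iterations, rng):
    # Balanced iteration subsampling: each sample is assigned to exactly
    # `uses_per_sample` of the `iterations` rounds, giving every sample
    # (client) an identical, predictable load.
    schedule = [[] for _ in range(iterations)]
    for i in range(n):
        for t in rng.sample(range(iterations), uses_per_sample):
            schedule[t].append(i)
    return schedule
```

In the CIFAR-10 setup above, `balanced_iteration_schedule(n, 655, 2000, rng)` would use every sample exactly 655 times, while `poisson_subsample(n, 655 / 2000, rng)` per iteration matches that only in expectation.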
Near Optimal Best Arm Identification for Clustered Bandits
Accept (poster)
Summary: This paper introduces a Best Arm Identification problem with clustering structures. Specifically, given $N$ agents and $M$ bandit instances (usually $N>M$), each agent faces one of the bandit instances, but the mapping is unknown. The goal is to identify the best arm for each agent with probability at least $1-\delta$. By making use of the clustering structure among the agents, this paper wishes to avoid repeated best arm identification within the same bandit instance, as the agents facing the same instance share the same best arm. Two algorithms (with an improved one) are proposed, Cl-BAI and BAI-Cl(++), which cluster the agents before and after the best arm identification process, respectively. Theoretical guarantees on the sample complexity are provided, as is a minimax lower bound. Experiments are conducted on synthetic and real datasets. Claims And Evidence: The authors provide proofs for the theorems in the main paper. I skimmed through the proofs and they look reasonable to me. Methods And Evaluation Criteria: This paper makes extensive use of the Successive Elimination algorithm, a commonly used method in Best Arm Identification. It is employed to facilitate agent clustering and to identify the best arm for each bandit instance. Theoretical Claims: I skimmed through the proofs for the theoretical guarantees. They appear reasonable to me. I did not check every detail. Experimental Designs Or Analyses: For the synthetic dataset, it serves as a sanity check for the proposed algorithms. The experimental results align well with the remarks in the preceding sections. For the real dataset, while the experimental results appear reasonable, I am not entirely convinced by the choice of $\eta$. Assumption 2.1 is crucial throughout the paper, as the value of $\eta$ directly impacts the advantage of the proposed algorithms over the naive approach. However, the choice of $\eta$ (and $\eta_1$) seems highly instance-dependent.
For example, in the MovieLens dataset, $\eta = 0.0027$, whereas in the Yelp dataset, it is significantly larger at $\eta = 0.375$. Additionally, the authors verify that these datasets satisfy Assumptions 2.1 and 6.1 using ground truth instance parameters, which are not accessible in real-world scenarios. To address this concern, I recommend that the authors conduct experiments to evaluate the robustness of the proposed algorithms when Assumption 2.1 does not hold or when $\eta$ is misspecified. Supplementary Material: I have gone through the whole supplementary material, including the proofs and the experiments. The proofs look reasonable but I didn't check every detail. Relation To Broader Scientific Literature: This paper falls within the field of Bandit Algorithms, specifically focusing on Best Arm Identification with fixed confidence. The authors assume the presence of $N$ agents and $M$ bandit instances, along with an unknown mapping between them. An effective algorithm should leverage the clustering structure to minimize redundant identification efforts. Essential References Not Discussed: The references appear well-selected and appropriate. Other Strengths And Weaknesses: **Strengths** 1. The problem formulation is well stated. 2. Upper bounds of the proposed algorithms are presented, as well as a minimax lower bound which indicates the upper bound is tight in some parameters under some instances. 3. The paper also discusses the communication efficiency of the proposed methods, indicating a trade-off between communication cost and identification efficiency. **Weaknesses**: 1. The assumptions are quite strong: 1. It assumes different bandit instances have different best arms, which may not be general enough. 2. Assumption 2.1 requires the knowledge of $\eta$, which is crucial for the proposed algorithm. 2. While Line 248 claims that Cl-BAI will be better than the naive algorithm, it is not observed in the experiments with real datasets. 3.
It would be great if the authors can specify the results for the case for non-uniform clusters, as mentioned in Remark 5.6, since the assumption that the agents are uniformly distributed is restrictive in Theorem 5.3. 4. The paper relies heavily on Successive Elimination, which makes it difficult for me to discern the technical novelty of the proposed method. Other Comments Or Suggestions: 1. Line 274 right: woth high probability -> with high probability. Questions For Authors: 1. This paper assumes the number of bandits $M$ is known at the beginning. Is it possible to remove such knowledge? e.g., in the MovieLens experiment, authors manually classify the users according to the ages and obtain $6$ clusters. Is it possible to remove the knowledge of $6$ and let the algorithm learn this automatically? 2. Can the authors highlight the technical contribution of the proposed method? Ethical Review Concerns: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Knowledge of $\eta$:** We have run additional experiments to validate the robustness of our schemes. Please refer to our rebuttal of Reviewer UFPL. **Different bandit instances have different best arms:** Our objective is to find the best arm for every agent, and so it seems reasonable to *define* the clustering based on best arms. Naturally, we put all the agents in a cluster who possess the same best arm. Furthermore, we are able to at least identify a few real-life datasets where these assumptions do hold. **$\eta$-free algorithm:** We can remove the knowledge of $\eta$ from our learning algorithms completely. We can propose a multi-phase algorithm where we start with a large enough value of $\eta$, and at the beginning of each phase, we reduce it by a factor of 2. After some phases, the value of $\eta$ falls below the actual separation and the algorithms start learning the best arm. If we select exponentially increasing phase lengths, we can show this multi-phase algorithm will succeed in finding the best arms of all the agents. Rigorously proving the correctness is part of our future plans. **Cl-BAI vs Naive in real data:** As explained in the same paragraph (lines 234-248), Cl-BAI will outperform the naive algorithm if the "separability" parameter $\eta$ is much bigger than the bandit sub-optimality (arm) gaps $\bar{\Delta}$; in particular when $\eta \gg \bar{\Delta}$. While this is the case in synthetic datasets, for the real datasets we chose, it turns out (based on our sub-sampling of data) that the bandit arm gaps are comparable to the cluster separation (e.g., for Yelp, $\eta=0.375$ and $\bar{\Delta} = 0.25$), and hence the sample complexities are comparable. **Results for non-uniform clusters:** We invoke results on the coupon collector problem with unequal probabilities $p_1,\ldots,p_M$, where $p_i$ is proportional to the size of cluster $i$.
From [1], the term $\mathcal{O}(M\log M/\delta)$ in Theorem 5.3 will be replaced with the following term: $\mathcal{O}\big[\int_0^\infty \big[1- \prod_{i=1}^M (1-e^{-p_i t})\big] dt\big]\log(1/\delta)$. **Novelty and reliance on SE:** Our goal is to perform both clustering and best arm identification jointly, and SE is a common tool to help in both jobs. We believe the novelty of this work lies in utilizing SE for both clustering and best arm identification judiciously in a *sample efficient* manner. We choose to run SE with different combinations of the subset of arms, success probability and number of rounds to reduce the sample complexity, which is highly non-trivial. The *easy tunability* of SE allows us to do this and obtain *near optimal* performance. To the best of our knowledge, this is not done in the literature yet. This work may be viewed as the first step towards understanding parameter-free clustering in the Federated setup with the objective of finding best arms for all the agents (not an *appropriately defined Global best arm*). **Knowledge of $M$:** The number of clusters $M$ does not need to be known for Cl-BAI, where the clusters are created based on a nearest-neighbor graph type construction. For BAI-Cl and BAI-Cl++, knowledge of $M$ is needed since the first phase corresponds to recovering the set of all possible best arms using random sampling. Here, we do not require the exact value; any upper bound will suffice with a small increase in sample complexity. **Technical Contribution:** (i) *Successive Elimination (SE):* We believe the contribution of this work lies in utilizing and tuning SE for both clustering and best arm identification judiciously in a *sample efficient* manner, which is highly non-trivial. As a result, we obtain *near optimal* performance in terms of sample complexity. Even though this seems apparently simple, to the best of our knowledge, this is currently absent from the literature.
(ii) *Coupon Collector:* We use ideas from the *Coupon Collector* problem to improve the sample complexity of our proposed algorithm, BAI-Cl. Using this with SE, we find at least one agent from each cluster with a set of active arms containing the best arms from all $M$ clusters. In the subsequent phase, we let the rest of the agents play only from the obtained subset. Since the cardinality of this set can be much smaller than the total number of arms, we get an improved sample complexity, which is near optimal. (iii) *Lower Bound:* We also use an instance-perturbation-based technique to obtain lower bounds on the expected sample complexity. We obtain both instance dependent as well as instance independent bounds. The lower bound we obtain (nearly) matches the sample complexity of our proposed algorithm, rendering them near optimal. Such a lower bound is also novel for this multi-agent best arm identification problem. (iv) *Experiments:* We run experiments on real datasets, the Yelp and Movielens datasets. **Reference** [1] "Birthday paradox, coupon collectors, caching algorithms and self-organizing search", Flajolet et al., Discrete Applied Mathematics, 1992. --- Rebuttal Comment 1.1: Comment: The empirical results provided partially resolve my concern about the robustness of $\eta$. However, (1) the smallest $\eta$ is only half of the ground-truth for each experiment; (2) the largest $\eta$ in Movielens (0.16) is still smaller than the smallest $\eta$ in Yelp (0.187). Due to these setups, the impact of $\eta$ still requires further investigation. In particular, as it is the lower bound on the true $\eta$ that is required as input, running the algorithm with a much smaller $\eta$ is expected. Under such a case, it remains unknown whether the algorithm can beat the **parameter-free baselines** or not. Regarding the halving strategy proposed by the authors, it seems to be a possible approach that leads to the $\eta$-free algorithm.
However, the design of the stopping rule and the final sample complexity are still unclear. The cumulative sample complexity for this halving strategy may be worse than the parameter-free baselines. As the authors indicate, an $\eta$-free algorithm is of great interest and can greatly enhance this work from my point of view. As the authors have solved my other concerns, I increase my score for the time being. But the authors are **strongly** suggested to conduct further experiments and include the $\eta$-free algorithm in the paper to make the paper complete. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their comments and agree that understanding the impact of $\eta$, as well as providing a rigorous description and analysis of a parameter-free algorithm, are important and require further careful investigation. We have extended our numerical evaluations on the Movielens and Yelp datasets to include further smaller values of $\eta$, as was suggested; the sample complexity results are tabled below. We have the following observations: 1) The naive scheme requires 9.17567403e+09 and 9.2677691e+06 samples for the Movielens and Yelp datasets respectively. Cl-BAI doesn't improve on these; however, BAI-Cl does much better even when the assumed value of $\eta$ is much smaller. Intuitively, this is because the first phase of BAI-Cl reduces the active arm set size from $K$ to $M$, which provides significant savings in the second phase. 2) Even with much smaller (assumed) values of $\eta$, the sample complexity does not vary too much, and so the results are quite robust in that sense.

Movielens:

| $\eta$ | Cl-BAI (No. of Pulls) | Cl-BAI (Error) | BAI-Cl (No. of Pulls) | BAI-Cl (Error) |
|-----------|----------------------|---------------|----------------------|---------------|
| 0.16 | 7.9739185e+08 | 10 | 4.27459816e+08 | 10 |
| 0.08 | 1.59879063e+09 | 0 | 9.29866052e+08 | 8 |
| 0.04 | 3.15616948e+09 | 0 | 6.21368206e+08 | 5 |
| 0.02 | 6.33333662e+09 | 0 | 1.02040616e+09 | 0 |
| 0.01 | 1.26017946e+10 | 0 | 3.44881983e+09 | 0 |
| 0.005 | 1.25831127e+10 | 0 | 3.73592584e+09 | 0 |
| 0.0027 | 1.25965021e+10 | 0 | 7.45895904e+09 | 0 |
| 0.0015 | 1.26132350e+10 | 0 | 7.61669485e+09 | 0 |
| 0.00075 | 1.25863287e+10 | 0 | 7.59916579e+09 | 0 |
| 0.000375 | 1.26012931e+10 | 0 | 7.61174240e+09 | 0 |

Yelp:

| $\eta$ | Cl-BAI (No. of Pulls) | Cl-BAI (Error) | BAI-Cl (No. of Pulls) | BAI-Cl (Error) |
|-----------|----------------------|---------------|----------------------|---------------|
| 3 | 10,991,609.8 | 10 | 3,680,129.1 | 9 |
| 1.5 | 13,076,497.2 | 10 | 1,117,523.4 | 0 |
| 0.75 | 13,118,020.2 | 0 | 1,032,964.9 | 0 |
| 0.375 | 13,120,822.0 | 0 | 1,078,600.6 | 0 |
| 0.187 | 13,154,100.6 | 0 | 1,001,680.3 | 0 |
| 0.093 | 13,114,919.4 | 0 | 1,123,845.5 | 0 |
| 0.046 | 13,079,106.6 | 0 | 1,149,175.9 | 0 |
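The unequal-probability coupon-collector expectation invoked earlier in this thread, $\int_0^\infty [1-\prod_{i=1}^M(1-e^{-p_i t})]\,dt$, is straightforward to evaluate numerically. A small sketch (our own code, using simple left-endpoint integration):

```python
import math

def expected_collection_time(p, dt=0.01, t_max=500.0):
    # Numerically evaluate E[T] = \int_0^inf (1 - prod_i (1 - e^{-p_i t})) dt,
    # the (Poissonized) expected time to sample an agent from every cluster
    # at least once, when cluster i is hit with rate p_i.
    total = 0.0
    for k in range(int(t_max / dt)):
        t = k * dt
        all_seen = 1.0
        for pi in p:
            all_seen *= 1.0 - math.exp(-pi * t)
        total += (1.0 - all_seen) * dt
    return total
```

For equal cluster sizes $p_i = 1/M$ this recovers the familiar $M H_M$ harmonic-sum value, and skewing the cluster sizes only increases the expectation, consistent with the replacement of the $\mathcal{O}(M\log M/\delta)$ term.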
Summary: The paper explores the problem of identifying the best arms in a multi-agent bandit setting, where agents form (unknown) cluster-based structures. To address this challenge, the authors propose two algorithms. The first algorithm, Cl-BAI, first clusters the agents and then identifies the best arm for a randomly chosen representative from each cluster. The second algorithm follows the same two phases but in a different order. Theoretical analyses of both algorithms provide insights into their sample complexity and communication efficiency. Additionally, experimental studies demonstrate the effectiveness of the proposed algorithms. ## Update After Rebuttal After reviewing the authors’ response, I have decided to maintain my current score. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. The work is primarily theoretical but includes experiments with a reasonably well-designed evaluation. Theoretical Claims: I reviewed the flow of some parts of the proofs but cannot confirm the correctness of all of them. Experimental Designs Or Analyses: The design of the experiments conducted on synthetic and real datasets appears sound. Supplementary Material: I reviewed parts of the proof (Appendix 9.9) and the experimental section (Appendix 9.11). Relation To Broader Scientific Literature: The key contributions of the paper are in the direction of online decision-making (bandits) and distributed/federated learning. Essential References Not Discussed: I am not aware of any missing prior works. Other Strengths And Weaknesses: Strengths: 1. The paper addresses a novel problem in best-arm identification (BAI) with clustered bandits. 2. To tackle this problem, the authors propose two main algorithms. The second algorithm, BAI-Cl, presents a particularly interesting approach and demonstrates superior performance both theoretically and experimentally. 3. The paper establishes worst-case and instance-based lower bounds for the problem. 4.
The conducted experiments show that the proposed algorithms are effective compared to the naive method and also the results align with some findings in the theoretical part. Weaknesses: 1. It is unclear why the authors did not use the current state-of-the-art algorithm for BAI to achieve better sample complexity. 2. The paper contains numerous typos and needs refinement. Additionally, a more comprehensive discussion of related work could be provided. Other Comments Or Suggestions: The paper lacks a discussion/conclusion section, and I believe the overall writing could be improved. Additionally, there are some typos, which I highlight below: * Line 117: It would be clearer to replace $ \Delta_{m, k^{*}_{m}} $ with $\Delta^{*}_{m}$ , for example, as the current notation is somewhat confusing. * Remark 5.1: "woth" → "with". * Line 1225: Possible typo and citation issue. Questions For Authors: 1. I did not fully understand Remark 4.1. Since the TAS algorithm is optimal, wouldn’t it be more reasonable to incorporate it into the proposed algorithm? What challenges prevent the direct application of TAS? 2. Are there any other relevant algorithms that could tackle the problem, even sub-optimally? In the experiments, you compared your approach only against a naive method for identifying the best arms. Are there any alternative approaches beyond yours and the naive baseline that could serve as a more competitive comparison? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Current state-of-the-art algorithm for Best Arm Identification (BAI):** We choose Successive Elimination (SE) over other state-of-the-art algorithms like Track And Stop (TAS) or Lower Upper Confidence Bound (LUCB) for a number of reasons: (i) We aim to address the problem of Clustering and BAI jointly, and SE is a common tool to address both of these problems. In other words, SE could seamlessly be adapted to our problem formulation. The same thing cannot be said for TAS and LUCB. (ii) For TAS, in general only asymptotic guarantees are known, whereas we require sharp non-asymptotic guarantees for proving correctness of our proposed algorithms. Some non-asymptotic guarantees for LUCB are known; however, they are not immediately adaptable to our problem formulation. Moreover, SE has $3$ tuning parameters, namely the subset of arms, the success probability and the number of rounds. Overall, SE is easier to tune compared to other best arm identification algorithms like TAS. (iii) We would like to clarify that the order-wise performance (sample complexity) of both SE and TAS is similar, whereas SE may have sub-optimal constants. In this work, we provide guarantees on order-wise sample complexity and hence TAS and SE would yield similar results. (iv) In terms of experiments, one can use other BAI algorithms. However, TAS is computation heavy as compared to SE. Hence, we have taken SE as a default choice. **Typos and Related work:** We apologize for the typos. We have corrected them in the modified version. We have also added a few relevant and recent papers in the Related Work section and provided comparisons with them. **The paper lacks a discussion/conclusion section:** Thank you for the suggestion. Indeed, we have added a Conclusion section now. Additionally, we have corrected the typos, actively worked on the writing, and re-organized a few sections (including the appendix) to improve readability.
**Why is optimal TAS not used?** Although we have discussed this in the first question, we summarize the comparison with TAS here: (i) Note that for TAS, in general only asymptotic guarantees are known, whereas we require sharp non-asymptotic guarantees for proving correctness of our proposed algorithms. Moreover, SE has $3$ tuning parameters, namely the subset of arms, the success probability and the number of rounds. Overall, SE is easier to tune compared to other best arm identification algorithms like TAS. (ii) We would like to clarify that the order-wise performance (sample complexity) of both SE and TAS is similar, whereas SE may have sub-optimal constants. In this work, we provide guarantees on order-wise sample complexity and hence TAS and SE would yield similar results. (iii) We aim to address the problem of Clustering and Best Arm Identification (BAI) jointly, and SE is a common tool to address both of these problems. In other words, SE could seamlessly be adapted to our problem formulation. The same thing cannot be said for TAS. (iv) Finally, in experiments, one can use other Best Arm Identification algorithms like TAS. However, TAS is computation heavy compared to SE. Hence, we have taken SE as a default choice. **Other relevant algorithms and comparison:** There are a lot of works in bandit clustering, albeit in a parametric setting. In other words, for linear bandits and contextual bandits (which are parameterized), one can naturally define a clustering based on the underlying unknown parameters. On the other hand, needless to say, there is a rich literature on the Best Arm Identification (BAI) problem without clustering structure. However, we are not aware of any other papers in the intersection of non-parametric bandit clustering and BAI, where our work lies. Alternatively, there are a few works on the BAI problem for Federated Bandits. Out of these, we consider [1].
Note that if we remove the notion of *Global best arm* from [1] and only focus on recovering *Local best arms*, the proposed algorithm there reduces to the naive algorithm we compare against since they do not consider any underlying clustering among agents. We would like to point out that, since we are interested in the BAI for all the agents, naturally the notion of *Global best arm* makes little sense in our setup. **Reference:** [1] Almost Cost-Free Communication in Federated Best Arm Identification; Kota Srinivas Reddy, P. N. Karthik, and Vincent Y. F. Tan; AAAI 2023.
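Since Successive Elimination and its three tuning knobs (the arm subset, the confidence level $\delta$, and the round budget) come up repeatedly in this discussion, here is a generic textbook-style sketch of the subroutine; this is our own minimal version for rewards in $[0,1]$, not the paper's exact instantiation:

```python
import math, random

def successive_elimination(pull, arms, delta, max_rounds):
    # Pull every active arm once per round; eliminate arms whose empirical mean
    # falls more than twice the confidence radius below the current leader.
    # `pull(a)` returns a stochastic reward in [0, 1] for arm a.
    active = list(arms)
    mean = {a: 0.0 for a in active}
    for r in range(1, max_rounds + 1):
        for a in active:
            mean[a] += (pull(a) - mean[a]) / r  # running average (count == r)
        radius = math.sqrt(math.log(4 * len(arms) * r * r / delta) / (2 * r))
        leader = max(mean[a] for a in active)
        active = [a for a in active if mean[a] >= leader - 2 * radius]
        if len(active) == 1:
            break
    return active  # singleton once the best arm has been isolated
```

The three arguments `arms`, `delta`, and `max_rounds` correspond directly to the tuning parameters the rebuttal varies across phases.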
Summary: This work studies the problem of best arm identification for clustering of multi-armed bandits, where $N$ agents are grouped into $M$ clusters, with each cluster solving a stochastic bandit problem. The goal is to identify the best arm for each agent under a $\delta$-probably correct ($\delta$-PC) framework, while minimizing sample complexity and communication overhead. The authors propose two algorithms: Clustering then Best Arm Identification (Cl-BAI) and Best Arm Identification then Clustering (BAI-Cl). They provide $\delta$-PC guarantees for both methods, derive bounds on their sample complexity, and provide a lower bound for the problem class. They also propose a variant of BAI-Cl under additional assumptions, which is (order-wise) minimax optimal when $M$ is small. They also provide experimental results to validate the theoretical findings. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. Experimental Designs Or Analyses: No. Supplementary Material: No. Relation To Broader Scientific Literature: The work is closely related to the literature of clustering of bandits. The authors study the setting of best arm identification for clustering of MAB, which is not yet studied to the best of my knowledge. Essential References Not Discussed: This work is closely related to clustering of bandits. Though the authors have mentioned some works in this line, I think the related literature should be discussed more. Below are some references (not a complete list): 1. Online Clustering of Contextual Cascading Bandits, AAAI 2018. 2. Improved Algorithm on Online Clustering of Bandits, IJCAI 2019. 3. Federated Online Clustering of Bandits, UAI 2022. 4. Online Clustering of Bandits with Misspecified User Models, NeurIPS 2023. Other Strengths And Weaknesses: Strengths: 1. To the best of my knowledge, this is the first work to study best arm identification in the clustering of MAB setting. 2.
The paper is well-written and easy to follow. 3. The authors provide algorithms with sample complexity bounds, and prove a lower bound. 4. They also provide experimental results to support the theoretical findings. Weaknesses: The main weakness I am concerned about is the assumptions (Assumption 2.1 and Assumption 6.1), which seem to be strong. For example, under Assumption 2.1, the algorithms need to know $\eta$ (or a lower bound). In the clustering of linear bandits literature, there is a similar assumption about the separation gap of the different feature vectors, but it does not need to be known. I am wondering if these assumptions can be relaxed. Other Comments Or Suggestions: No. Questions For Authors: Please see the weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Additional references** We thank the reviewer for these pointers. As suggested by the reviewer, we will aim to do a more extensive review of the related literature on clustering in bandits. We would like to point out that one key difference between most of the literature there (including the papers pointed out above) and our work is in how the clusters are defined. While the literature mainly considers parameterized settings such as linear / contextual bandits where the clusters are based on the unknown user preference vectors, our setting is non-parameterized and the cluster definition is based on the mean reward vectors. **Assumptions 2.1 and 6.1** Thank you for raising this point. Note that our objective is to find the best arm for every agent in the system, and so it seems reasonable to *define* the clustering based on the best arms. Assumptions 2.1 and 6.1 essentially quantify a *separability* condition amongst clusters, based on how the best arm of a given cluster performs in other clusters. These assumptions allow us to provide analytical guarantees on the performance of our proposed algorithms; furthermore, as part of our numerical evaluation, we are able to at least identify a few real-life datasets where these assumptions do hold and our proposed schemes are able to provide significant savings in terms of sample complexity and communication cost. Having said that, there might be other definitions of separability which might also be suited to our setup. However, we choose this since it aligns well with our overall objective of best arm identification. **Knowledge of $\eta$** The reviewer's point is well-taken. Our response to this is two-fold: 1) *Robustness to $\eta$*: We have run additional experiments to illustrate that our algorithms are robust to the choice of $\eta$. For the Movielens and the Yelp datasets, we run our algorithms assuming different values of $\eta$ and have tabulated the results below. 
Each experiment is repeated 10 times. We have the following observations on the impact of $\eta$-misspecification on correctness and sample complexity. Firstly, as expected, when the assumed $\eta$ is smaller than the true value, the algorithm recovers the best arms correctly. Surprisingly, the same in fact holds true even with larger values of $\eta$, which illustrates that our algorithms are in fact quite robust to the choice of $\eta$. Again, as expected, the incurred sample complexity grows as the gap between the assumed $\eta$ and the true value increases, and there are errors in recovery when $\eta$ is chosen too large.

*Movielens*: $M = 6, N = 120, K = 316, \eta_{true} = 0.0027$

| $\eta$ | Cl-BAI (No. of Pulls) | Cl-BAI (Error) | BAI-Cl (No. of Pulls) | BAI-Cl (Error) |
|-----------|----------------------|---------------|----------------------|---------------|
| 0.16 | 7.9739185e+08 | 10 | 4.27459816e+08 | 10 |
| 0.08 | 1.59879063e+09 | 0 | 9.29866052e+08 | 8 |
| 0.04 | 3.15616948e+09 | 0 | 6.21368206e+08 | 5 |
| 0.02 | 6.33333662e+09 | 0 | 1.02040616e+09 | 0 |
| 0.01 | 1.26017946e+10 | 0 | 3.44881983e+09 | 0 |
| 0.005 | 1.25831127e+10 | 0 | 3.73592584e+09 | 0 |
| 0.0027 | 1.25917968e+10 | 0 | 3.90723456e+09 | 0 |
| 0.0015 | 1.25652013e+10 | 0 | 3.71104286e+09 | 0 |

*Yelp*: $M = 4, N = 80, K = 211, \eta_{true} = 0.375$

| $\eta$ | Cl-BAI #Error | Cl-BAI #Pulls | BAI-Cl #Error | BAI-Cl #Pulls |
|-------|-------------|---------------|-------------|---------------|
| 3 | 10 | 10991089.6 | 10 | 4930620.3 |
| 1.5 | 10 | 13096451.2 | 0 | 973381.7 |
| 0.75 | 0 | 13062668.6 | 0 | 1403340.3 |
| 0.375 | 0 | 13071359.4 | 0 | 985600.6 |
| 0.187 | 0 | 13079278.4 | 0 | 1148393.8 |

2) *$\eta$-free algorithm*: Alternatively, we can remove the knowledge of $\eta$ from our learning algorithms completely. We can propose a multi-phase algorithm where we start with a large enough value of $\eta$, and at the beginning of each phase, we halve $\eta$.
After some phases, the value of $\eta$ falls below the actual gap and the algorithms start learning the best arm. If we select exponentially increasing phase lengths, we can show that this multi-phase algorithm will succeed in finding the best arms of all the agents. Rigorously proving the correctness is part of our future plans.
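The multi-phase scheme described above can be written as a small doubling-trick wrapper. Everything here is hypothetical: in particular, `run_phase` and its ability to signal an inconclusive phase are assumed interfaces, and the stopping rule is exactly the part that remains to be analyzed rigorously:

```python
def eta_free_wrapper(run_phase, eta0=1.0, max_phases=30):
    # Halve eta and double the phase budget each phase; run_phase(eta, budget)
    # is assumed to return the identified best arms, or None if the phase was
    # inconclusive. Once eta drops below the true separation a phase can
    # succeed, and the geometric budgets keep the total cost within a
    # constant factor of the final (successful) phase.
    for p in range(max_phases):
        answer = run_phase(eta0 / 2 ** p, 2 ** p)
        if answer is not None:
            return answer
    return None
```

For instance, with a phase oracle that succeeds once $\eta \le 0.1$, the wrapper terminates at $\eta = 1/16$ with budget $16$.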
Summary: The paper considers the problem of federated fixed-confidence best arm identification, where the agents are assumed to be clustered and the agents of the same cluster share the same bandit instance. The authors propose two algorithms, Cl-BAI (cluster-then-BAI) and BAI-Cl (BAI-then-cluster) and show their sample complexities with high probability. Under an additional assumption, the authors propose BAI-Cl++ with improved sample complexity. Then, the authors provide an in-expectation lower bound for the class of instances with mean reward gap assumption, which implies that BAI-Cl++ is orderwise optimal. The algorithms are tested numerically on various datasets, showing their efficacy. Claims And Evidence: See my comments on Theoretical Results and Experiments Methods And Evaluation Criteria: See Experimental Designs or Analyses Theoretical Claims: I didn't check the whole proof in detail, but I have checked and confirmed the correctness of the lower bound proof. I want the authors to clarify the following issue during the proof of Cl-BAI: In line 658, it states that $i$ and $j$ will not be assigned to the same cluster if $|\hat{\mu}^i - \hat{\mu}^j| > \eta / 2$. However, as Cl-BAI is constructing a nearest neighbor graph, I don't think this is necessarily the case. It may be that there is a $k$ such that $|\hat{\mu}^i - \hat{\mu}^k| \leq \eta / 2$ and $|\hat{\mu}^j - \hat{\mu}^k| \leq \eta / 2$, yet $|\hat{\mu}^i - \hat{\mu}^j| > \eta / 2$ (e.g., consider $\hat{\mu}^i = \eta / 2, \hat{\mu}^j = - \eta / 2, \hat{\mu}^k = 0$). But, as $i \sim k \sim j$, $i$ and $j$ are indeed in the same cluster. Experimental Designs Or Analyses: The experimental designs seem appropriate. Some minor comments: 1. Error bars are missing from all experiments (Figure 1, Figure 2) 2. The legends in Figure 1 are too small... As it seems that the legend is the same across the subfigures, consider separating the legend so that the text is legible... 3.
Section 8 states that each algorithm went through multiple independent runs. How many? Supplementary Material: I briefly reviewed the Appendix containing the proof, but not in detail. Relation To Broader Scientific Literature: - To the best of my knowledge, it tackles a new problem setting of federated FC-BAI with clustering structure - Interesting algorithmic idea of running SE with coupon-collected best arm candidates. Essential References Not Discussed: None to my knowledge. Other Strengths And Weaknesses: **Strengths:** 1. New problem setting not considered before 2. Interesting algorithms, especially BAI-Cl; the idea of using the coupon-collected best arms as input to the SE is quite interesting and, to the best of my knowledge, novel 3. Extensive related works 4. Experimental results showing the efficacy **Weaknesses:** 1. Writing should definitely be improved. The Appendix is especially riddled with unnecessary typos, hindering its readability. See Suggestions for a *partial* list of things that I found. This is one of the main reasons for my score not being higher; I could not check the correctness of the proofs in detail. 2. There was a potential error in the proof, which (even though I didn't check the remainder of the proof for the reason mentioned above) makes me question the rigorous correctness of the overall proof. The algorithm design itself looks solid. Other Comments Or Suggestions: 1. Why are all multiplications written as $a.b$? I've never seen this notation... 2. Line 290 (right column): extra $+$ 3. Line 346: $>>$ => $\gg$ 4. Line 698: $G_i, G_j$? 5. Line 1225: broken citation 6. Please use \appendix before starting the Appendix, and please consider reorganizing/restructuring the Appendix. 7. Misnumbering in the Appendix: What is Theorem 1? (Sec. 9.3) 8. Undefined notation: what is $D(\cdot, \cdot)$ in line 657? 9. The authors should remind in the Appendix that $\epsilon_r = 2^{-r}$. 10. Why say **proof sketch**? 11.
A table of sample complexities as well as communication costs for the three algorithms would help a lot in comparing them. 12. It seems that the number of clusters $M$ does not need to be known in advance. The authors should mention this explicitly, as if $M$ is known, then one could just do 1-dimensional $M$-means clustering. 13. (very minor) Could the authors consider using the author-year format (or something similar) throughout the main text? It's very hard to keep track of the references when they are referred to only by numbers. 14. Conclusion (and preferably Future Work) should be included. Maybe move some of the related work part to the Appendix. Questions For Authors: 1. Can the authors comment on the optimality of the communication cost? I also suggest putting such discussions in the main text. 2. In Algorithm 1 line 17, is the agent selected randomly from each cluster? I don't think this is critical, but for rigor, this should be made precise. 3. Can the authors elaborate on why SE is "*easy-to-tune*"? 4. The algorithms are dependent on the knowledge of $\eta$. What happens if the learner misspecifies $\eta$ by overestimating or grossly underestimating it? Especially for the latter case, the authors mentioned in Remark 4.2 that it doesn't change the theoretical results. Does this mean that the theoretical guarantees with the *same* $\eta$ hold regardless of which $\eta' \leq \eta$ is used? Or do the guarantees hold with $\eta$ replaced with $\eta'$? 5. Going further, one future work the authors could mention is making the algorithm parameter-free. For instance, Ghosh et al. (2023) (ref. [40]) proposed a doubling-trick-type wrapper algorithm whose guarantee does not depend on the problem-specific parameter. 6. Instead of constructing a nearest neighbor graph, how about taking a similar approach to Algorithm 2 of Yun & Proutiere (2016)? 7. Overall, what is the main reason (theoretical intuition) for the performance gap between Cl-BAI and BAI-Cl? 
8. The upper bounds are with high probability, but the lower bounds are in-expectation. Any chance of closing this gap, either by adapting high-probability lower bound techniques (multiple hypotheses) of Tsybakov (2009), or a different analysis of the algorithms? Se-Young Yun and Alexandre Proutiere. Optimal Cluster Recovery in the Labeled Stochastic Block Model. NIPS 2016 (https://arxiv.org/abs/1510.05956) Code Of Conduct: Affirmed. Overall Recommendation: 4
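The transitivity counterexample in the Theoretical Claims comment above can be checked numerically. The sketch below is our own construction mirroring the described nearest-neighbor graph (not the paper's pseudocode): it builds the threshold graph and confirms that $i$ and $j$ land in one connected component via $k$.

```python
# Counterexample from the review: eta = 1, estimated means mu_i = eta/2,
# mu_j = -eta/2, mu_k = 0. Agents i and j are more than eta/2 apart, yet
# both are within eta/2 of k, so a nearest-neighbor graph with threshold
# eta/2 still places i and j in the same connected component.
eta = 1.0
mu = {"i": eta / 2, "j": -eta / 2, "k": 0.0}

# Threshold graph: connect two agents iff their estimated means differ
# by at most eta/2.
agents = list(mu)
edges = {
    (a, b)
    for a in agents
    for b in agents
    if a < b and abs(mu[a] - mu[b]) <= eta / 2
}

assert ("i", "j") not in edges                       # i, j fail the pairwise test...
assert ("i", "k") in edges and ("j", "k") in edges   # ...but both link to k

# Connected component via a simple graph traversal: i ~ k ~ j.
def component(start):
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for a, b in edges:
            if u in (a, b):
                for v in (a, b):
                    if v not in seen:
                        seen.add(v)
                        stack.append(v)
    return seen

assert component("i") == {"i", "j", "k"}
```

This is exactly the "bad event" the rebuttal below argues happens only with small probability when all three agents truly share one mean vector.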
Rebuttal 1: Rebuttal: **Nearest neighbor and pairwise distance:** The claim being discussed here considers the `bad' event that two agents belonging to the same cluster get assigned to different clusters, i.e., there exists an arm in the union of the active sets of these two agents whose estimated means for agents $i$ and $j$ differ by more than $\eta/2$. Our claim is a high-probability result: it implies that while the bad event (including, for example, the specific case that the reviewer has mentioned) can happen, it will occur with very small probability, because the underlying ground truth dictates that for any $i,j,k$ which are in the same cluster, their true mean reward vectors will all be identical and thus, when the SE procedure (as prescribed by our algorithm) is run by each of them in the first (clustering) phase, it is very unlikely that the mean reward estimates will be significantly far apart. The claim essentially provides an upper bound on the probability that the bad event occurs. **Number of independent runs:** 50 **Writing, typos, figures:** We apologize. We have actively worked on the writing in the revised draft and also improved the figures. **Knowledge of $M$:** The number of clusters $M$ does not need to be known for Cl-BAI, where the clusters are created based on a nearest-neighbor-graph-type construction. For BAI-Cl and BAI-Cl++, knowledge of $M$ (or an upper bound) is needed since the first phase corresponds to recovering the set of all possible best arms. **Communication cost:** We believe there is a tradeoff between the sample complexity and the communication cost; we see some evidence of this in our analysis. Any algorithm will incur at least $N\cdot \log M \cdot c_b$ communication cost since the learner has to infer each agent's cluster membership. However, we believe this bound to be weak, and identifying the optimal tradeoff is of interest. We have included a more thorough discussion on this. 
**Random selection in Algo 1:** The agent is selected arbitrarily from each cluster. We have clarified this in the revised manuscript. **SE is easy-to-tune:** We say this because SE provides a general procedure which can be altered using three 'knobs': the subset of arms, the target error probability, and the number of rounds. We use different combinations of these parameters in different schemes (as well as stages of the same scheme) to achieve various guarantees on surviving arms, their confidence interval sizes, etc. **Robustness with respect to $\eta$:** The reviewer is correct in stating that if $\eta' \leq \eta$, our sample complexity results hold with $\eta$ replaced by $\eta'$. We have run additional experiments to validate the robustness of our schemes with respect to $\eta$. Please see the response for Reviewer UFPL. **Parameter-free algorithm:** We can remove the knowledge of $\eta$ from our learning algorithms completely. We can propose a multi-phase algorithm where we start with a large enough value of $\eta$, and at the beginning of each phase, we halve $\eta$. After some phases, the value of $\eta$ falls below the actual gap and the algorithms start learning the best arm. If we select exponentially increasing phase lengths, we can show this multi-phase algorithm will succeed in finding the best arms of all the agents. Rigorously proving the correctness is part of our future plans. **Yun & Proutiere, 2016:** Yun and Proutiere consider cluster recovery in the Stochastic Block Model, where the feedback structure (edge label) is quite different from our setting. However, we do agree that some similar spectral-decomposition-based method might be feasible for our setting as well. **Performance gap between Cl-BAI and BAI-Cl:** Cl-BAI clusters the users in the first phase and then employs one agent from each cluster to identify the corresponding best arms. However, all $K$ arms remain active throughout, including the first phase where all the agents participate. 
On the other hand, in BAI-Cl, the first phase reduces the active set of arms from $K$ to only $M$ using participation from only $O(M\log M)$ agents, and this provides a great reduction in the sample complexity. **Expected sample complexity:** As in several works on best arm identification (BAI) in multi-armed bandits, our upper bounds (achievability) are stated as high-probability results whereas the lower bounds are in expectation. However, there are works available in the literature which prove expected sample complexity upper bounds for BAI. For example, Kalyanakrishnan et al. (2012) for the LUCB scheme and Even-Dar et al. (2006) for SE-style schemes. We believe that similar ideas can potentially be used to derive expected sample complexity bounds for our schemes as well. **References** 1) S. Kalyanakrishnan et al., PAC subset selection in stochastic multi-armed bandits, ICML 2012. 2) E. Even-Dar et al., Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems, JMLR 2006. 
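As a side note on the $O(M \log M)$ agent count mentioned above: under the simplifying assumption (ours, not the paper's) that each newly sampled agent's cluster is uniform over the $M$ clusters, the number of agents needed to observe every distinct best arm is the classic coupon-collector quantity $M H_M \approx M \ln M$, which matches the stated order. A minimal sketch:

```python
# Coupon-collector heuristic behind the O(M log M) agent count: if each
# sampled agent reveals one of M distinct best arms uniformly at random,
# the expected number of agents needed to see all M of them is
# M * H_M = M * (1 + 1/2 + ... + 1/M) ~ M ln M.
def expected_agents(M):
    return M * sum(1.0 / k for k in range(1, M + 1))

assert expected_agents(1) == 1.0
# For M = 10: 10 * H_10 = 10 * 2.92896825... ~ 29.29 agents in expectation.
assert abs(expected_agents(10) - 29.2896825) < 1e-4
```

This is only a back-of-the-envelope illustration; the paper's actual first-phase guarantee may rest on a different argument.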
Design Considerations in Offline Preference-based RL
Accept (poster)
Summary: The paper provides a theoretical study of offline learning methods from human preferences. The authors first establish a unified framework and relevant assumptions which fit almost all preference-based learning losses. Then, they propose a policy benchmark which is used to measure the quality of a policy output from optimizing empirical losses on preference data. Theorem 3.6 provides bounds on the closeness of these two policies. The findings suggest that using the squared loss is the best choice in that its optimizer is the closest to the benchmark "ideal" policy among the considered losses. These findings are also supported by some simple experimental results on language models. Claims And Evidence: The claims made in the submission are supported by rigorous theoretical proof and experimental results. Methods And Evaluation Criteria: The proposed models, dataset and losses are all relevant to the considered setting. Theoretical Claims: I checked the correctness of the proof of Theorem 3.6 and, to the best of my understanding, it is clear and concise. Experimental Designs Or Analyses: I did not replicate the experimental results of the paper. Supplementary Material: I have not reviewed the supplementary material. Relation To Broader Scientific Literature: The key contribution of the paper is the theoretical insight into the benefits of using squared losses when directly optimizing over a given preference dataset. These findings are definitely relevant to the literature on learning from human preferences. Essential References Not Discussed: There are no essential references missing, although the paper could benefit from a more comprehensive related work section in the broader theoretical RLHF literature (as opposed to the narrower offline PbRL literature which is mentioned). Other Strengths And Weaknesses: The paper is clear, concise and easy to follow. 
The added contribution is indeed original, as it unifies previously proposed loss functions by considering the necessary assumptions which all of them seem to satisfy, and providing a general upper bound that depends on parameters of interest. The implications of Theorem 3.6 point out the seeming superiority of IPO in terms of producing a policy that is closer to the benchmark policy. This makes for an interesting finding. Other Comments Or Suggestions: N/A Questions For Authors: 1. I understand that in practice, the reference policies and the policies to be optimized are often non-degenerate. However, they can have negligible entries in the sense that $\mu(y|x) \approx 0$ or $\pi(y|x) \approx 0$ for some $x,y$. That makes the bounds on $\log\pi(y|x)$ or $\log(\pi(y|x) / \mu(y|x))$ extremely large, especially when you consider it over the data-generating distribution, not only the data itself. Moreover, the presence of such a bound as an exponent in the upper bounds of Theorem 3.6 seems concerning for such border cases. Can Assumption 3.1 be relaxed? If it doesn't hold, is there a way to get meaningful bounds? 2. Although the squared loss seems to be superior to the other losses under the assumptions made, it is not particularly clear how realistic it is. Specifically, the logistic loss, for instance, is not an arbitrary choice, but results from the synthesis of the RLHF pipeline into one step. Thus there is an inherent intuition behind its definition. Is there a practical intuition behind the squared loss? Under what preference generation model, if any, does it make sense? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your encouraging feedback on our work. We address your key questions below. **1. Relaxing Assumption 3.1**: You are correct in noting that having an exponential dependence on the size of log-ratios is not ideal. But this limitation is shared by a line of prior works [1], [2], [3]. While it would be desirable to weaken this dependence, we also note that the degradation of learning quality as the log probability ratios grow large during learning is quite clear in our experiments. As we see in Figures 1 and 2, the drop of log probabilities under the learned policy coincides with degradation in the performance of DPO. Since the reference policy has well-behaved log probabilities, the probability ratios look nearly identical to the learned policy's log probabilities here. So we also think that the bounds qualitatively capture a phenomenon actually realized in the experiments, even though we might be able to improve the precise functional forms to be milder than exponential in future work. **2. Generating model underlying squared loss**: We thank you for the insightful question about the generating model underlying the squared loss. We refer you to Appendix A, which provides this discussion for general link functions, as well as gives the specific preference generation model for squared loss in lines 591-592: $P(\omega=1|x, y, y') = \frac{1}{2} + \frac{R^\star(x, y) - R^\star(x, y')}{2}$. We thank you again for your thoughtful feedback, and will add these discussions to the final version of the paper.
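The two preference-generation models being contrasted here can be made concrete with a few lines. The sketch below is our own (`bt_preference` and `linear_preference` are hypothetical helper names, not the paper's code): both links map a reward difference $d = R^\star(x, y) - R^\star(x, y')$ to a preference probability.

```python
import math

# Two link functions mapping a reward difference
# d = R*(x, y) - R*(x, y') to P(omega = 1 | x, y, y').
def bt_preference(d):
    # Bradley-Terry model underlying the logistic loss: sigmoid(d).
    return 1.0 / (1.0 + math.exp(-d))

def linear_preference(d):
    # Linear link behind the squared loss (rebuttal's lines 591-592):
    # 1/2 + d/2, a valid probability whenever |d| <= 1.
    assert abs(d) <= 1.0, "linear link needs bounded reward differences"
    return 0.5 + d / 2.0

# Both links agree on indifference (d = 0) and are monotone in d.
assert bt_preference(0.0) == 0.5 and linear_preference(0.0) == 0.5
assert bt_preference(0.5) < linear_preference(0.5)  # linear link is steeper near 0
```

The linear link only defines a valid probability for bounded reward differences, which echoes the boundedness assumptions discussed in the reviews.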
Summary: This paper explores the theoretical aspects of offline preference-based reinforcement learning (PBRL). It examines a broad range of offline PBRL methods, including DPO and IPO, and establishes theoretical bounds on the sub-optimality of the policies learned by these methods. The analysis is based on specific assumptions regarding the loss functions and base policies, with the derived bounds also depending on certain quantities determined by the loss functions. Consequently, the theoretical findings offer some insights into selecting appropriate loss functions and base policies. Lastly, the paper presents experiments to validate the theoretical results. Claims And Evidence: ## Some of the key claims made in this paper are not well supported by clear and convincing evidence. 1. The paper claims that its theoretical results provide insights into design choices such as the selection of loss functions and base policies for offline RLHF. However, the provided evidence does not fully support this claim. Regarding the loss function, Remark 3.7 suggests that the squared loss is preferable due to its curvature properties but fails to satisfy the realizability assumption. However, the paper does not extend its theoretical analysis to scenarios where this assumption is violated. As a result, it remains unclear when one should choose a particular loss function in practice. Similarly, regarding the base policy, Remark 3.8 states that its selection affects the benchmark policy. This implies that different variants with distinct base policies are not evaluated against a common standard, reducing the meaningfulness of theoretical comparisons. 2. Additionally, there is a significant gap between the theoretical analysis and experimental validation. The primary discrepancy lies in the performance metrics used. 
The theoretical results rely on KL divergence with respect to the benchmark policy, whereas the experiments evaluate performance using evaluation preference and the log probability of winning or losing. Since the theoretical and empirical evaluations employ fundamentally different measures, the experimental results do not convincingly support the proposed theory. Methods And Evaluation Criteria: This paper primarily analyzes existing methods rather than proposing a new one. However, it introduces a benchmark policy specifically for the theoretical analysis of offline RLHF methods. This benchmark policy is defined as the policy that attains the pointwise minimum of the loss function. Under the Bradley-Terry (BT) model assumption, this benchmark policy coincides with the standard KL-regularized reward-maximizing policy. Given this alignment, I find the proposed benchmark policy to be a reasonable choice for studying offline RLHF. Theoretical Claims: I only checked the proof sketch in Section 4 and did not check the detailed proof. I feel the results are correct. Experimental Designs Or Analyses: Please see my comments in the above Claims And Evidence part. Supplementary Material: I had a rough look at the appendix which includes a discussion on proper loss and experiment details. Relation To Broader Scientific Literature: This paper studies offline RLHF for LLM alignment and may inspire advancements toward training better LLMs. Essential References Not Discussed: To the best of my knowledge, this paper provides a sufficiently thorough discussion of all closely related works. Other Strengths And Weaknesses: Please see my comments in the above Claims And Evidence part. Other Comments Or Suggestions: 1. The notation $\ell$ is used inconsistently. Before line 255, the loss function $\ell$ is defined as taking a single variable as input. However, in line 255, $\ell$ is instead expressed as a function of two variables. 
This inconsistency may cause confusion and should be clarified. Questions For Authors: 1. What is the meaning of the notation $\Pi_{\mathcal{R}}$ under Assumption 3.1? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank you for your thoughtful feedback on our work. You are right in noting that the main contributions of our work are theoretical, and not in proposing new methods. We address your main concerns with our theoretical results below. **Squared loss and realizability**: You are correct that the squared loss is not a clear win across all the theoretical criteria, and that it makes a less natural realizability assumption. We note that we can use standard arguments to extend our analysis to the case where the realizability assumption holds uniformly, up to an error $\epsilon$. We did not include this for clarity of exposition in the submission, but can easily add this to the final version. More generally, our view is that realizability is an unverifiable assumption in practice, but is perhaps more likely to hold for rich model classes such as LLMs. At the same time, the curvature of the loss is something we can control better. So an approach that primarily favors the squared loss as a default objective based on the curvature properties is reasonable practically, since it will yield good results unless realizability is violated, in which case a practitioner can switch to other losses, such as logistic. This tradeoff between intuitive modeling niceness and optimization/learning tractability is often encountered in the literature on generalized linear models as well, and we will expand on this discussion in the final submission. **Choice of base policy affects benchmark policy**: You are also correct that the choice of the base policy affects the benchmark policy. As we note on line 161 (right), the most natural benchmark, which would be independent of these choices, cannot be recovered by the class of offline methods including DPO and variants which are studied here. That said, the base policy is a part of the loss function. 
It is quite standard even in the literature on simpler classification and regression setups that when the loss changes, the convergence point changes accordingly, and we can only analyze convergence to the optimal solution according to the chosen loss. Indeed, as we notice in Remark 3.8, the choice of base policy has no explicit role in the convergence bound, but only plays a more subtle role in influencing the realizability assumption and the benchmark policy. We think it is still useful to uncover the different points of convergence that different choices used in practice correspond to, so that we can prefer methods which correspond to desirable points of convergence. **Gap between the performance measures in theory and experiments**: We agree that it would have been great if we could just have a similar KL bound or an easy surrogate in our experiments as well. Unfortunately, without access to the unknown benchmark policy, this is an impossible task. Consequently, we measure the most natural quantity that we could think of empirically, and that is also consistent with the broader literature. While not perfect, we believe this is still insightful. If the reviewer has additional recommendations for performance measures that are measurable, and that would narrow the theory-practice gap, we would be happy to include them in the final submission. **Notation**: We apologize for the inconsistency regarding the notation for loss, we will fix it. The notation $\Pi_{\mathcal{R}}$ is a typo from an earlier version. It should just be $\Pi$. Thanks again for your insightful comments. We hope that our responses address some of your concerns, and we would be happy to discuss further if you have additional feedback.
Summary: The paper offers a theoretical analysis of offline preference-based RL algorithms such as DPO. Specifically, the authors investigate an empirical observation: offline preference-based RL is often worse than online RL, and faces some degeneracies during optimization. To set up the problem, the authors first set up a general policy class to include different regularization methods and then make 5 assumptions: bounded log prob and loss; realizability; proper loss; coverage; and curvature. The main conclusion is Thm 3.6, which is an error bound determined by $c_\mu$, $C$ and $\epsilon$. As a result, losses with a high curvature will have a larger $c_\mu$, and therefore better error bounds. Moreover, the base policy and constraints also play a role: different base policies may change the $\pi^*$, and different constraints make different assumptions on $\pi^*$. Lastly, the authors show that DPO may have large $R$ and small $c_\mu$, causing degeneracies observed in previous works. Experiments show that square loss is superior to logistic loss and $\pi_{ref}$ performs better than the uniform distribution. Claims And Evidence: ### Square loss is better than logistic loss Evidence: * The $c_\mu$ of square loss is ½, yielding a reasonable bound * The experiment shows that square loss is better than logistic loss Question: * Should the $c_\mu$ be 2 instead of ½? ### Base policy matters, and $\pi_{ref}$ is a relatively good choice Evidence: * Jointly supported by Section 3.1 and experiment ### Different constraints play a role in the $\pi^*$ assumption, which may affect results Evidence: * Remark 3.9. Question: * Is it possible to draw a plot of CPO to justify this assumption? ### DPO faces degeneracies because of large $R$ and low $c_\mu$ Evidence: * Remark 3.10 Methods And Evaluation Criteria: The methods and evaluation make sense. Theoretical Claims: Strengths: * I had a rough glance at Section 4 and did not find obvious mistakes. 
Weakness: * Assumption 3.5 may be too strong and prevent analysis of many possible losses, for example, sigmoid. Experimental Designs Or Analyses: Strengths: * Experiments and analyses make sense. Weakness: * As I mentioned earlier in "Claims and Evidence", it would be better to add experiments such as CPO to justify the claim on constraints. Supplementary Material: N/A Relation To Broader Scientific Literature: Strengths: * Different from other papers such as GPO, which give empirical analyses, this paper gives a unified theoretical analysis and understanding of offline preference-based RL. Weakness: * Assumption 3.5 restricts the theoretical analysis to a limited class of losses. Specifically, the authors analyze square loss and show its superiority to logistic loss. However, similar analyses and conclusions have been found in prior works such as IPO. Essential References Not Discussed: It may be beneficial to discuss the difference between IPO's theoretical analysis and this paper's. Other Strengths And Weaknesses: Other Strengths * It is good to see that the authors report error bars in the experiments, which should be encouraged * I personally think the paper is clearly written overall. Other Weakness * See above Other Comments Or Suggestions: * Line 366 right side: "consistently" -> "consistently better" * Line 373 right side: "Figure 1(left)" -> "Figure 1 (right)" ## update after rebuttal I keep my score. Questions For Authors: See above. My main concern is that Assumption 3.5 seems a bit strong, preventing analysis of other losses, and I will be happy to raise my score if the authors can address these concerns. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your encouraging and thoughtful comments. **Severity of restrictions due to Assumption 3.5**: You are correct in noting that Assumption 3.5 places some restrictions on the loss function, but we believe that these restrictions are relatively mild and standard as we discuss below: 1. Please note that the curvature condition is not pointwise (i.e. per x, y, y’ triple), but only in expectation, and not uniform, but only local around the optimal policy $\pi^\star$. So the required assumption is local curvature of the expected loss around the optimal solution, which is a very common assumption in optimization and convergence theory. 2. The curvature assumption, while not explicitly stated, is responsible for the e^B term in almost all prior works on analysis within the Bradley-Terry-Luce model with the logistic loss, such as [1], [2] and [3]. Please also see Remark 4.9 in [1] for an explicit mention of this aspect. In fact, even in simpler settings such as analysis of generalized linear models, a similar curvature assumption can be seen in [4], Assumption 1. While we agree that weakening such conditions further would be highly desirable, addressing this goes well beyond the study of preference-based learning methods, and would be a significant undertaking to be carried out in a separate work. **Differences to analysis in IPO paper**: We apologize for missing the discussion of differences from the analysis in the IPO paper in the submission. The authors in IPO indeed note the issue of the curvature in the link function, and use this to motivate the IPO algorithm. However, there is no explicit quantitative analysis of the role of the link function beyond a simple example, since the focus is more on obtaining a practical algorithm for the identity link function case. We instead derive precise convergence guarantees, and also highlight the role of offline data coverage which is not captured in the IPO paper. 
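The curvature point above can be illustrated numerically. In the sketch below (our own, with hypothetical helper names), the second derivative of the logistic loss $\log(1+e^{-z})$ at margin $z$ is $\sigma(z)(1-\sigma(z))$, which collapses exponentially in $|z|$, while the squared loss keeps constant curvature; this decay is where the $e^B$ factor in the cited bounds comes from.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logistic_curvature(z):
    # Second derivative of log(1 + exp(-z)) at margin z:
    # sigma(z) * (1 - sigma(z)); the squared loss has constant
    # second derivative 2 regardless of z.
    s = sigmoid(z)
    return s * (1.0 - s)

# Maximal curvature 1/4 at z = 0; near-zero curvature at large margins.
assert logistic_curvature(0.0) == 0.25
assert logistic_curvature(10.0) < 1e-4  # ~ e^{-10}: the source of the e^B term
```

A uniform lower bound on this curvature over margins $|z| \le B$ is $\Theta(e^{-B})$, which is the local-curvature quantity the rebuttal refers to.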
We thank you again for the insightful comments, and we will update the final submission with the discussions above, as well as address your other suggestions. References: * [1] Sharp Analysis for KL-Regularized Contextual Bandits and RLHF, arXiv 2411.04625. * [2] Principled Reinforcement Learning with Human Feedback from Pairwise or K-wise Comparisons, ICML 2023. * [3] Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-constraint, ICML 2024. * [4] Parametric Bandits: The Generalized Linear Case, NeurIPS 2010. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal. I decided to maintain my inclination toward acceptance after reading the rebuttal.
Summary: The paper investigates offline reinforcement learning methods that use a fixed dataset of responses and human preference feedback to align language models. It analyzes how various design choices in methods like DPO, IPO, and SLiC affect the quality of the learned policy. The study provides a unified theoretical framework that bypasses traditional reparameterization arguments and includes an empirical validation on a standard summarization benchmark. Claims And Evidence: The claims about bound on the sub-optimality and other theoretical insights are well supported. Methods And Evaluation Criteria: The benchmark DPO methods are appropriate and diverse enough. Multiple loss functions are also studied. Given the loss function and the data coverage assumption, the main bounds are derived. Theoretical Claims: Section 4 analysis contains accurate derivation of the proof of Theorem 3.6 which is intuitive and accurate. Experimental Designs Or Analyses: The experimental results might be limited in terms of various datasets and backbone models. However, the major ablation sufficiently demonstrates the theoretical point regarding the assumptions. Supplementary Material: N/A Relation To Broader Scientific Literature: Related to major DPO methods [1,2] where this paper focuses on various design choices and how they affect the learning of an optimal policy. [1] Rafailov, R., Sharma, A., Mitchell, E., Manning, C. D., Ermon, S., and Finn, C. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, 2024b. [2] Azar, M. G., Guo, Z. D., Piot, B., Munos, R., Rowland, M., Valko, M., and Calandriello, D. A general theoretical paradigm to understand learning from human preferences. In International Conference on Artificial Intelligence and Statistics, pp. 4447–4455. PMLR, 2024. Essential References Not Discussed: Multi-negative DPO [1] could be further discussed. 
In addition, group-based regularization [2] could also be discussed. [1] Chen, Yuxin, Junfei Tan, An Zhang, Zhengyi Yang, Leheng Sheng, Enzhi Zhang, Xiang Wang, and Tat-Seng Chua. "On softmax direct preference optimization for recommendation." arXiv preprint arXiv:2406.09215 (2024). [2] Ramesh, Shyam Sundhar, Yifan Hu, Iason Chaimalas, Viraj Mehta, Pier Giuseppe Sessa, Haitham Bou Ammar, and Ilija Bogunovic. "Group robust preference optimization in reward-free rlhf." Advances in Neural Information Processing Systems 37 (2024): 37100-37137. Other Strengths And Weaknesses: The theoretical bound of sub-optimality is intuitive and useful. Empirically, the setting could use more ablation studies, including various backbone models, datasets, etc. Other Comments Or Suggestions: See Strengths and Weaknesses Questions For Authors: N/A Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for the encouraging feedback. We will add the suggested references [1, 2] to our discussion.
Improving LLM Video Understanding with 16 Frames Per Second
Accept (poster)
Summary: The paper explores high FPS in video understanding with MLLMs. It is an interesting and meaningful attempt, and the authors employ some techniques to solve the problem of an excessive number of tokens. The performance gain is promising in some specialized scenarios like sports, as expected. ## update after rebuttal I appreciate the authors' response, but I still have concerns about the temporal design of pre-fusing 16 (or some other number of) frames within one second. The current temporal design amounts to a decoupled visual temporal perception module attached to an LLM. The utilization of high-frame-rate input is restricted by the temporal fusion module. Smarter token selection, rather than fusion, would enable a more systematic VLM for processing high-frame-rate videos. Claims And Evidence: The claims are supported by the experimental results. Methods And Evaluation Criteria: 1. The method of merging high frame-rate tokens is very simple, utilizing the generalization from single-frame to multiple-frame scenarios. However, this strategy has limited temporal capacity and cannot fully leverage the temporal dynamics in the high frame-rate input, being only slightly superior to average pooling. Either heuristic token preprocessing or a more systematic model design is expected. 2. The evaluation on existing general video benchmarks cannot showcase the advantage of high frame-rate training, so the authors extend to the high-speed sports scenarios. It is necessary to supplement more scenarios beyond sports, compare with more video specialist models and conduct more comprehensive ablations on the high frame-rate token processing. Theoretical Claims: Correct Experimental Designs Or Analyses: More analysis on the aligner is desired, including input temporal range, output number of tokens, insertion positions, etc. Supplementary Material: The supplementary material showcases examples of the sports data. It would be better to present visualizations of model inputs under different frame rates. 
Relation To Broader Scientific Literature: High-frame-rate exploration is a significant step in pushing VLMs toward human-level perception, and it can be related to the human perception literature.

Essential References Not Discussed: No missing references

Other Strengths And Weaknesses: The problem is meaningful, but the authors should study more technical designs for the aligner and conduct more comprehensive experiments.

Other Comments Or Suggestions: None

Questions For Authors: None

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful and constructive feedback on our paper. Below, we respond to the questions you have raised.

---

**1: The evaluation on existing general video benchmarks cannot showcase the advantage of high frame-rate training. & Supplement more scenarios beyond sports**

In fact, in general video understanding, F-16 also demonstrates significant advantages on short-video benchmarks. Note that F-16 uses less data (less video and image data) than most of the models in Table 1, and it only uses LLaVA-Video-178k. As for more scenarios, we also evaluate F-16 on a very recent benchmark, FAVOR-Bench, which targets **fine-grained video motion understanding**. F-16 achieves SOTA results on it among 7B models. We list the results on these benchmarks here to present the advantages of F-16 more clearly:

| | Video-MME Short | NExT-QA | TemporalBench | MotionBench | FAVOR-Bench |
| ---------------- | --------------- | ------------------ | --------------- | --------------- | ------------------ |
| Previous 7B SOTA | 75.7 (NVILA) | 83.2 (LLaVA-Video) | 24.7 (Qwen2-VL) | 52.0 (Qwen2-VL) | 41.5 (VideoLLaMA3) |
| F-16 | **78.9** | **84.1** | **37.2** | **54.5** | **46.0** |

For long video understanding, although F-16 is not outstanding, it still shows competitive results. This is because frame sampling matches the training condition (FPS = 16) only for short videos; for long videos, the effective FPS is much lower than 16. For instance, since F-16 samples at most $110\times 16=1760$ frames, it perceives a 1760-second video at FPS = 1. Sparse frame sampling leads to slightly poorer performance of the high-frame-rate aligner, resulting in average results on long videos.

---

**2: More technical designs and experiments on the aligner.**

We studied various high-frame-rate modeling structures.
All attempts confirmed that visual-language alignment via linear transformations helps prevent model performance decline, which means linear projections effectively and efficiently align the visual encoder's output semantic space with the LLM's input space. This is consistent with prior work such as NVILA. Appendix B details the structures we studied. Using CNN modules to capture spatiotemporal differences between frames led to worse results than an MLP, suggesting CNN aligners may harm the semantic information derived from the visual encoder. Replacing max pooling with a learnable linear layer improved the training loss but performed worse at test time. Using a self-attention layer in place of the MLP projector's first linear layer to extract frame dynamics also affected the semantics of the visual features, resulting in slightly worse performance when scaling up the trainable parameters. Ultimately, we chose a 2-layer MLP structure: its first linear layer ensures semantic alignment between the LLM input and visual encoder output spaces, and the second compresses duplicated information across continuous frames. It is well known that any continuous mapping can be approximated arbitrarily accurately by a 2-layer MLP with a sufficiently large hidden dimension, which provides an insight into our motivation. Experimental results also validate this design.

---

**3: More analysis on the aligner is desired, including input temporal range, output number of tokens, inserting positions, etc.**

As for the temporal range, we have trained models at different FPS, setting the width $w$ of the processing window equal to the FPS. As the input FPS gradually increases from 1 to 16, the model's performance shows an upward trend, as shown in Fig. 3(b), "Train FPS=Test FPS". Beyond training, testing at different FPS is also tried, shown in Fig. 3(b), "Train FPS=16".
Though the number of tokens input to the aligner increases with FPS, the number of output tokens remains the same, because the aligner's processing window width is set equal to the FPS. This setting enables a fair comparison of models with different frame rates. For inserting positions, since the high-frame-rate aligner is a replacement for the single-frame aligner, only the pre- and post-pooling placements used by other VLLMs are evaluated. Table 3 shows that pre-pooling causes larger differences between adjacent frames, and Table 4 shows that post-pooling performs much better under high-frame-rate settings. These results indicate that the high-frame-rate aligner relies heavily on inter-frame variations in visual features for effective learning: if adjacent sampled frames are no longer similar to each other, the high-frame-rate aligner will struggle to learn well.

---

**4: Present the visualizations of model inputs under different frame rates.**

We visualize videos at different frame rates here: https://github.com/F-16-LLM/Rebuttal/blob/main/README.md

Under low-frame-rate cases such as FPS = 1, many details are missed. We will add this part to the updated paper.
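As a back-of-the-envelope check of the window arithmetic in points 1 and 3 above (the per-frame token count and frame budget here are illustrative assumptions, not the paper's exact numbers): with the processing window width set equal to the FPS, the LLM-side token count is independent of the sampling rate, and a fixed frame budget determines the effective FPS for long videos.

```python
# Illustrative sketch (not the authors' code): window width w = FPS, so the
# number of tokens reaching the LLM does not depend on the sampling rate.

TOKENS_PER_WINDOW = 196  # assumed tokens per fused 1-second window

def llm_token_count(duration_s, fps, tokens_per_window=TOKENS_PER_WINDOW):
    """Each 1-second window of `fps` frames is fused into one window's
    worth of tokens, so the LLM sees duration_s * tokens_per_window tokens."""
    n_frames = duration_s * fps
    window_width = fps                    # w = FPS
    n_windows = n_frames // window_width
    return n_windows * tokens_per_window

def effective_fps(video_seconds, max_frames=1760, train_fps=16):
    """Long videos are subsampled once the frame budget is exhausted."""
    return min(train_fps, max_frames / video_seconds)

# LLM-side token count is identical at FPS = 1 and FPS = 16:
assert llm_token_count(60, fps=1) == llm_token_count(60, fps=16)
# A 1760-second video is perceived at FPS = 1 under the 1760-frame budget:
assert effective_fps(1760) == 1.0
```

The same arithmetic shows why short videos keep the training-time FPS of 16 while hour-long videos degrade toward sparse sampling.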
Summary: This paper studies the problem of high-frame-rate video understanding. The authors claim that existing methods for video understanding merely sample video frames at a low FPS (mostly lower than 2), which causes critical information loss. To tackle this problem, they introduce F-16, a novel multimodal large language model (MLLM) specially designed for high-frame-rate video understanding at 16 FPS. The main contributions are summarized as: 1) [model]: the first high-frame-rate video LLM; 2) [benchmark]: a high-frame-rate sports video benchmark; and 3) [method]: efficient variable-frame-rate decoding.

Claims And Evidence: The main claim of the paper is that "the existing paradigm for video understanding (sampling at around 2 FPS) is sub-optimal, which would lose much information when facing highly dynamic videos". This claim is clear, reasonable, and supported with convincing evidence.

Methods And Evaluation Criteria: Yes

Theoretical Claims: There are no proofs in the submission.

Experimental Designs Or Analyses: The experiments are conducted on both the proposed high-frame-rate benchmark (NBA videos) with manual annotations and public video understanding benchmarks. The experimental protocols are reasonable.

Supplementary Material: The authors provide an appendix at the end of the main paper. It provides further explanation of the proposed benchmark, more ablation studies, and case studies.

Relation To Broader Scientific Literature: The key contribution is to extend existing Video-LLMs from a low frame rate to a high frame rate, which is more natural and more suitable for analyzing highly dynamic videos. To the best of my knowledge, this is the first work to do such an exploration.

Essential References Not Discussed: No

Other Strengths And Weaknesses: Generally, this is a good paper on LLM-based video understanding. Extending existing paradigms to 16 FPS is a non-trivial setting. The proposed high-frame-rate aligner balances efficiency and performance well.

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the positive rating for the paper. We sincerely appreciate your recognition of our work.

---

Rebuttal Comment 1.1: Comment: Thanks for the response from the authors. I'm keeping my original rating.
Summary: This paper proposes a new method, F-16, that increases the frame rate of existing video LLMs to 16 frames per second (FPS). The paper argues that existing video LLMs, which typically operate on low frame rates (e.g., 1 FPS), lose crucial dynamic visual information. F-16 aims to address this by processing videos at a significantly higher frame rate while employing a visual-text aligner to compress redundant visual information within 1-second clips. This allows the model to capture subtle but important motion cues. The paper claims that F-16 achieves state-of-the-art performance among 7B-parameter video LLMs on both general (Video-MME, TemporalBench) and fine-grained video understanding benchmarks. Further, it excels in complex spatiotemporal tasks such as high-speed sports analysis, outperforming models like GPT-4o and Gemini 1.5 Pro. The authors also propose a novel decoding method enabling efficient low-frame-rate inference without retraining the model.

Claims And Evidence: The claims made in the submission are generally well supported by evidence.

Claim: Higher frame rates enhance video understanding. This is supported by the consistent performance gains observed across various benchmarks (Video-MME, TemporalBench, MotionBench, sports datasets) when comparing F-16 with other 7B models.

Claim: F-16 excels in high-speed sports analysis. Table 2 demonstrates a significant performance advantage of F-16 in tasks like gymnastics, diving, basketball, and football compared to other video LLMs. The comparison to GPT-4o and Gemini 1.5 Pro is also compelling.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria seem reasonable for the problem of video understanding.

High-frame-rate sampling: The decision to use 16 FPS is empirically motivated and aligns with the need to capture more dynamic visual information. The claim that this is a better trade-off is reasonable.
Visual-text aligner with 3-layer MLP: The proposed aligner is designed to transform visual features into text-like tokens. The MLP design is justified for its efficacy and compatibility with existing image encoders.

Evaluation benchmarks: The use of standard video understanding benchmarks like Video-MME, TemporalBench, MotionBench, and sports datasets is appropriate for evaluating the performance of F-16. The choice of metrics (accuracy, F1-score) is also standard for the tasks considered.

Theoretical Claims: There are no theoretical claims in the paper that require proof checking.

Experimental Designs Or Analyses: The experimental designs and analyses seem sound.

Comparison with other models: The paper compares F-16 with several state-of-the-art video LLMs and proprietary models (GPT-4o, Gemini 1.5 Pro) to demonstrate its effectiveness.

Ablation studies: The ablation studies on pooling strategies and alignment strategies (Table 4 and Table 5) provide insights into the importance of different components of F-16.

Analysis of the high-frame-rate aligner: The analysis of visual features output by the image encoder and the cosine similarity analysis help in understanding the benefits of the high-frame-rate aligner.

Supplementary Material: I only briefly reviewed the supplementary material. I found the examples of sports data and video captions helpful in understanding the model's capabilities. However, a more detailed description of the training procedure, including hyperparameter settings and training time, would be beneficial.

Relation To Broader Scientific Literature: The paper builds upon the existing literature on video LLMs, particularly those that focus on improving video understanding by leveraging pre-trained image encoders and modality alignment techniques. The key contribution of this paper is the emphasis on higher frame rates, which has been relatively unexplored in the context of LLMs.
The paper acknowledges and compares its approach with existing methods for video processing and temporal input compression.

Essential References Not Discussed: No

Other Strengths And Weaknesses:

Strengths:
- Novelty: The focus on high-frame-rate video understanding is a novel and important direction for video LLMs.
- Performance: F-16 achieves state-of-the-art performance among 7B models.
- Efficiency: The variable-frame-rate decoding method enables efficient inference without retraining.
- Completeness: The paper is well written and provides sufficient details about the proposed methods and experimental results.

Weaknesses:
- While variable-frame-rate decoding is discussed, the actual cost and memory footprint requirements are not shown; this is a limiting factor.

Other Comments Or Suggestions: The paper could benefit from a more detailed analysis of the computational cost of F-16, including the memory footprint and inference time. It would be interesting to explore the potential of using even higher frame rates (e.g., 30 FPS) and the trade-offs between performance and computational cost.

Questions For Authors: Can you provide more detailed information about the training procedure, including hyperparameter settings, training time, and the resources (e.g., number of GPUs) used for training? This information would help to better assess the reproducibility of the results. If the training process is relatively efficient, it would make the paper more appealing to the wider community.

What are the computational costs (memory footprint, inference time) of F-16 compared to other video LLMs? This information is important to understand the practical limitations of the proposed approach. If F-16 has a manageable computational cost, it would strengthen the paper's claim of being a practical solution for high-frame-rate video understanding.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful and constructive feedback on our paper. Below, we respond to the questions you have raised.

---

**1: Can you provide more detailed information about the training procedure, including hyperparameter settings, training time, and the resources (e.g., number of GPUs) used for training?**

Regarding the training procedure, F-16 is first initialized from LLaVA-OneVision-7B, as shown in Sec. 4.1. Then, F-16 is trained on general videos. At this stage, the image encoder stays frozen while the other parts of the model are updated. To further verify the advantages of high-frame-rate modeling, the model is then fine-tuned on high-speed sports videos. LoRA is applied to the LLM and serves as the only trainable module in this stage.

As for the other training settings, F-16 is trained with the Adam optimizer and a cosine scheduler. The learning rate is set to 2e-5, the warm-up ratio is set to 0.03, and the batch size per device is 1. For general video training, 128 H100 GPUs are used to train for 1 epoch (about 13,000 update steps), which takes about 35 hours. For high-speed sports fine-tuning, 64 H100 GPUs are used to train for 5 epochs (about 9,000 update steps), which takes about 20 hours. For comparison, the FPS=1 model takes about 18 hours for general video training and 10 hours for high-speed sports fine-tuning. Note that the difference in training time primarily comes from two parts: a small portion is due to the increase in encoding time, while the major part is the catastrophic growth in CPU time when reading more frames from long videos, which leads to lower GPU utilization. We have not optimized video frame extraction and simply use the Python library "Decord" for it. Appropriate optimization or preprocessing, such as extracting frames in advance, can significantly improve training speed.
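For quick reference, the training settings stated in this response can be gathered into a single config sketch. The field names below are our own invention; only the values come from the response.

```python
# Training settings as stated in the rebuttal, collected in one place.
# Dict keys are illustrative; values are taken from the text above.
f16_general_video_training = {
    "init_from": "LLaVA-OneVision-7B",
    "optimizer": "Adam",
    "lr_scheduler": "cosine",
    "learning_rate": 2e-5,
    "warmup_ratio": 0.03,
    "per_device_batch_size": 1,
    "image_encoder_frozen": True,
    "gpus": 128,        # H100
    "epochs": 1,        # ~13,000 update steps, ~35 hours
}

f16_sports_finetuning = {
    "trainable_module": "LoRA on the LLM only",
    "gpus": 64,         # H100
    "epochs": 5,        # ~9,000 update steps, ~20 hours
}
```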
---

**2: What are the computational costs (memory footprint, inference time) of F-16 compared to other video LLMs? This information is important to understand the practical limitations of the proposed approach.**

With the high-frame-rate aligner, although 16 times more visual frames are input, the number of tokens fed into the LLM remains the same as in the low-frame-rate model. Therefore, the computational cost of the LLM is the same as that of the low-frame-rate model; the difference mainly comes from the encoder and the aligner.

- Inference time. As compared in Fig. 3(a), the inference time of the proposed high-frame-rate aligner is negligible compared with the LLM and the visual encoder, so the comparison to other models can mainly focus on the visual encoder. In our observation, the inference time of the visual encoder grows linearly with the number of input frames. Therefore, the inference time of the visual encoder increases by 16x, which makes the total inference time of F-16 roughly double that of a comparable video LLM with FPS=1.
- Memory footprint. The memory cost can be divided into 4 parts: the final visual tokens, the hidden layer of the aligner, the output of the visual encoder, and the inner computation of the visual encoder. The final visual tokens are kept at the same number, so their memory cost does not increase. For the other 3 parts, although it seems that they would grow linearly with the number of frames, they can be handled sequentially because different processing windows are independent, for instance by using a for-loop over processing windows. Therefore, no significant increase in memory cost is observed for F-16.
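The "sequential over independent windows" argument above can be illustrated with a toy 2-layer-MLP-style aligner (all shapes and weights here are our own assumptions, not the paper's implementation): because each 1-second window is processed independently, computing windows one at a time in a loop yields exactly the batched result while only ever materializing one window's intermediates.

```python
# Illustrative sketch: per-window aligner computation is order-independent,
# so a memory-friendly loop over windows matches the fully batched result.
import numpy as np

rng = np.random.default_rng(0)
FPS, TOKENS, DIM, OUT = 16, 4, 8, 8          # toy sizes, not the paper's

W1 = rng.standard_normal((FPS * DIM, 32))    # 1st layer: semantic alignment
W2 = rng.standard_normal((32, OUT))          # 2nd layer: compress duplicates

def align_window(window):
    """window: (FPS, TOKENS, DIM) frames -> (TOKENS, OUT) fused tokens."""
    x = window.transpose(1, 0, 2).reshape(TOKENS, FPS * DIM)
    return np.maximum(x @ W1, 0) @ W2        # 2-layer MLP with ReLU

video = rng.standard_normal((10 * FPS, TOKENS, DIM))   # 10 seconds of frames
windows = video.reshape(-1, FPS, TOKENS, DIM)

# Fully batched: all windows' intermediates live in memory at once.
x_all = windows.transpose(0, 2, 1, 3).reshape(-1, FPS * DIM)
batched = np.maximum(x_all @ W1, 0) @ W2

# Streamed: one window's intermediates at a time; only final tokens kept.
streamed = np.concatenate([align_window(w) for w in windows])

assert np.allclose(batched, streamed)
assert streamed.shape == (10 * TOKENS, OUT)  # token count fixed per second
```

Only the final visual tokens accumulate; the encoder and aligner intermediates for each window can be discarded before the next window is processed, which is the basis of the flat memory footprint claimed above.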
A Peer-review Look on Multi-modal Clustering: An Information Bottleneck Realization Method
Accept (poster)
Summary: For the three limitations faced by most current weighted multimodal clustering methods, this paper, inspired by the peer-review mechanism in academia, iteratively considers one modality as the "author" and the remaining modalities as "reviewers" to obtain a peer-review score for each modality. To improve reliability, a trustworthy score with a self-supervised working mechanism is further designed. Finally, a new PTIB method is proposed, and extensive experimental results show its effectiveness.

Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Extensive experimental results show its effectiveness over many compared methods.

Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense for the problem. The proposed method iteratively considers one modality as the "author" and the remaining modalities as "reviewers" to obtain a peer-review score for each modality. Then, a trustworthy score with a self-supervised working mechanism is further designed to improve reliability. The evaluation criteria, such as the datasets, evaluation metrics, and compared methods in the experiments, are frequently used in the community.

Theoretical Claims: I have checked the correctness of the proofs for the theoretical claims, including Theorem 3.3 and its proof in Appendix A.1.

Experimental Designs Or Analyses: I have checked the soundness/validity of the experimental designs and analyses, including subsections 4.1, 4.2, 4.3, and 4.4 and the corresponding analysis.

Supplementary Material: There is no supplementary material.

Relation To Broader Scientific Literature: The proposed method is novel in integrating the peer-review idea to address the multi-modal clustering problem.

Essential References Not Discussed: There are no related works that are not currently discussed in the paper.
Other Strengths And Weaknesses: The authors are inspired by the peer-review mechanism in academia and propose a new and well-organized framework named PTIB. The introduction, methods, and experimental sections are articulated clearly. This paper is well written, the novelty is sufficient, and the problem is well addressed. However, there are also some weaknesses:

1. Many trustworthy multi-modal classification or clustering methods have been published in recent years, and this paper also mentions "trustworthy". What are the differences between this paper and existing trustworthy multi-modal classification or clustering methods?
2. Methodological limitations are only mentioned in Appendix B and are not summarized in the conclusion section.
3. There is a lack of description of the experimental environment, which would help readers better reproduce the main parts of the method.
4. The font size in some pictures is small, as shown in Figure 2.

Other Comments Or Suggestions: Please see the above; I have no other comments. I am looking forward to the authors' reply to my comments, which may help me give a final rating.

Questions For Authors: Please see the above; I have no other comments. I am looking forward to the authors' reply to my comments, which may help me give a final rating.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the insightful comments and constructive suggestions. We have carefully revised the whole manuscript and provide detailed responses to each point below.

**1. There are many trustworthy multi-modal classification or clustering methods published in recent years. This paper also mentioned the 'trustworthy'. What are the differences between this paper and existing trustworthy multi-modal classification or clustering methods?**

***Response:*** Thanks for your comments. Regarding the trustworthiness of multiple modalities, almost all existing methods [1-4] focus on trustworthy multimodal classification. To ensure the reliability of both the multi-modal integration and the final decision, trustworthy multimodal classification methods produce a stable and reasonable uncertainty estimation for each modality and thus promote both classification reliability and robustness. For example, Han et al. [2] introduce the variational Dirichlet to characterize the distribution of the class probabilities, parameterized with evidence from different views and integrated with the Dempster-Shafer theory, thus promoting both classification reliability and robustness. Zheng et al. [3] propose a trustworthy multimodal classification network via multi-level confidence learning, which integrates both feature- and label-level confidence learning for trustworthy multimodal classification. Zou et al. [4] induce a transparent fusion strategy based on a modality confidence estimation strategy to track information variation within different modalities for dynamic fusion. Different from them, the proposed method aims to guarantee the trustworthiness of the learned modal weights in a self-supervised manner. To the best of our knowledge, none of the existing weighted MMC methods employ a trustworthy strategy in the weight learning process.

[1] Z. Han, C. Zhang, H. Fu, and J. T.
Zhou, Trusted multi-view classification, in Proceedings of the International Conference on Learning Representations, 2021, pp. 1–11.

[2] Han, Z., Zhang, C., Fu, H., and Zhou, J. T. Trusted multi-view classification with dynamic evidential fusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(2), 2551-2566.

[3] Zheng, X., Tang, C., Wan, Z., Hu, C., and Zhang, W. Multi-level confidence learning for trustworthy multimodal classification. In Proceedings of the AAAI Conference on Artificial Intelligence, 2023, pp. 11381-11389.

[4] Zou, X., Tang, C., Zheng, X., Li, Z., He, X., An, S., and Liu, X. Dpnet: Dynamic poly-attention network for trustworthy multi-modal classification. In Proceedings of the 31st ACM International Conference on Multimedia, 2023, pp. 3550-3559.

**2. Methodological limitations are only mentioned in Appendix B and are not summarized in the conclusion section.**

***Response:*** Thanks for your comments. We will summarize the limitations of the proposed method in the conclusion section: The proposed method also has some possible weaknesses. It is designed for fully aligned and complete multi-modal clustering, where no data samples across modalities are unaligned, missing, or damaged. It also requires the number of clusters to be specified in advance, like almost all existing multi-modal clustering methods.

**3. Lack of description of the experimental environment, which may help readers to better reproduce the main parts of the methods.**

***Response:*** Thanks for your comments. All the compared methods and the proposed method are run in the same experimental environment: a desktop computer with the Windows 10 operating system, 32GB RAM, and MATLAB 2021a.

**4. The font size in some pictures is small, as shown in Figure 2.**

***Response:*** Thanks for your comments. We will use a more legible font size for the figures in the revised version.
We have also checked all the figures in the manuscript to ensure that they are clear and readable.

Thanks again for the valuable suggestions provided by the reviewer. The modifications will be added to the final version.
Summary: In this paper, the authors propose a new multi-modal clustering method based on the information bottleneck method, with an interesting peer-review perspective. The method works with a weighted mechanism using two learned scores, a peer-review score and a trustworthy score. Notably, the weight learning process is conducted without parameter tuning, which is good for practical applications. Many experiments on benchmark datasets show the effectiveness and superiority of the proposed method.

## update after rebuttal
The authors have addressed all of my concerns, and I have reviewed the comments from the other three reviewers. So, I have raised my score.

Claims And Evidence: The claims made in the submission are supported by clear and convincing experiments, which provide different levels of validation of the proposed method.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the multi-modal clustering problem, especially the peer-review learning mechanism, and are backed by many experimental results.

Theoretical Claims: I have checked the correctness of the proofs for the theoretical claims in the submission, i.e., Theorem 3.3 and its detailed proof.

Experimental Designs Or Analyses: I have checked the soundness/validity of the experimental designs and analyses in the submission. Many experiments on benchmark datasets show the effectiveness and superiority of the proposed method.

Supplementary Material: There is no supplementary material for this paper.

Relation To Broader Scientific Literature: The key contribution is a new multi-modal clustering method based on the information bottleneck method with an interesting peer-review perspective. It is new to the area.

Essential References Not Discussed: Essential references are discussed in the paper.

Other Strengths And Weaknesses: Strengths: (1) novelty: novel for the ICML conference; notable advantages compared with state-of-the-art methods.
(2) soundness: technically sound, under a very rigorous framework. (3) significance: the problem is significant, and the method may have an impact on the related community.

Weaknesses: (1) The peer-review perspective on multi-modal clustering is interesting; have the authors considered using this idea for other problems? The authors are encouraged to give some examples in the future work, which may help readers explore more possibilities of applying this idea to other problems. (2) Some equations in the figures are too small, which may affect readability. (3) Section 2 should not only introduce the IB method; some multi-modal clustering works using IB need to be added there.

Other Comments Or Suggestions: I have no other comments or suggestions; I have given all my comments in "Other Strengths And Weaknesses".

Questions For Authors: I have no other questions; I have given all my comments in "Other Strengths And Weaknesses".

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the insightful comments and constructive suggestions. We have carefully revised the whole manuscript and provide detailed responses to each point below.

**1. The peer-review look on multi-modal clustering is interesting, have the authors considered using this idea for other problems? The authors are encouraged to give some of them in the future work so that it may help the readers to explore more possibilities of this ideas into dealing with more problems.**

***Response:*** Thanks for your comments. It is possible to migrate the idea of "peer review" to federated learning. Federated learning is a distributed machine learning technique whose core concept is to enable multiple participants (e.g., devices or institutions) to collaboratively train a globally shared machine learning model without sharing local data, thereby achieving data privacy preservation and collaborative model building. A critical step in this process is secure aggregation: clients upload their locally trained model updates to a central server, which then merges these updates using aggregation algorithms (e.g., FedAvg aggregates local model updates from multiple clients via weighted averaging) to generate a new global model. In this "peer-review" framework, each client can be viewed as an "author" submitting its update, while the remaining clients act as "reviewers" scoring the quality of that update. If a client's update receives a low score, this may indicate that the client is a low-quality client or an external malicious node. By leveraging these scores, the server can reduce the importance of low-scoring updates, or exclude them during aggregation, ensuring the robustness of the global model.

**2. Some equations in the figure are too small, which may influence the readability.**

***Response:*** Thanks for your comments. We will use a more legible font size for the figures in the revised version.
We have also checked all the figures in the manuscript to ensure that they are clear and readable.

**3. In section 2, it is not only to introduce IB method, some multi-modal clustering works using IB needs to be added here.**

***Response:*** Thanks for your comments. Some newly added multi-modal clustering works using IB are the following: In recent years, IB theory has been widely used in various multi-modal clustering tasks. For example, Federici et al. [1] proposed a multi-modal IB method that can identify non-shared information between two modalities. Yan et al. [2] proposed a multi-modal IB method that uses shared representations of multiple modalities to eliminate the private information of a single modality; however, the modality-private information is eliminated as much as possible during data compression, exploring only the shared information of modalities without taking advantage of the complex relationships between them. Hu et al. [3] apply information bottleneck theory to the original data and the learned high- and low-dimensional features to fuse the modal information, but the final clustering result is obtained by directly averaging the local clusters from the modal high-dimensional features. Different from the existing multi-modal clustering methods based on IB theory, the proposed method considers the complex relationships between modalities, where the designed multi-modal peer review is used to reasonably score the contribution of each modality, and the self-supervised trustworthy score is used to ensure the reliability of the process.

[1] Federici, M., Dutta, A., Forre, P., Kushman, N., and Akata, Z. Learning robust representations via multi-view information bottleneck. arXiv preprint arXiv:2002.07017, 2020.

[2] Yan, X., Mao, Y., Ye, Y., and Yu, H. Cross-modal clustering with deep correlated information bottleneck method. IEEE Transactions on Neural Networks and Learning Systems, 2023.
[3] Hu, J., Yang, C., Huang, K., Wang, H., Peng, B., and Li, T. Information bottleneck fusion for deep multi-view clustering. Knowledge-Based Systems, 2024, 289, 111551.

Thanks again for the valuable suggestions provided by the reviewer. The modifications will be added to the final version.
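The federated-learning analogy sketched in point 1 of this rebuttal could look like the following minimal toy (every name, scoring rule, and number here is our own assumption, not part of the paper): each client's update is "reviewed" by the other clients, and low-scoring outlier updates are down-weighted in a FedAvg-style merge.

```python
# Toy sketch of peer-review-weighted aggregation (all details assumed):
# reviewers score an author's update by its closeness to their mean update,
# and the server merges updates with the normalized scores as weights.
import numpy as np

def peer_review_aggregate(updates):
    """updates: list of 1-D arrays (one local model update per client)."""
    updates = [np.asarray(u, dtype=float) for u in updates]
    scores = []
    for i, author in enumerate(updates):
        reviewers = [u for j, u in enumerate(updates) if j != i]
        # "Review": distance of the author's update to the reviewers' mean.
        dist = np.linalg.norm(author - np.mean(reviewers, axis=0))
        scores.append(np.exp(-dist))       # far-off updates score near zero
    weights = np.array(scores) / np.sum(scores)
    merged = np.sum([w * u for w, u in zip(weights, updates)], axis=0)
    return merged, weights

honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
malicious = np.array([10.0, -10.0])        # outlier / potentially malicious
merged, weights = peer_review_aggregate(honest + [malicious])

assert weights[-1] < min(weights[:-1])     # the outlier gets the lowest weight
assert np.linalg.norm(merged - np.array([1.0, 1.0])) < 0.5
```

In this toy, the malicious update receives a near-zero weight, so the merged model stays close to the honest clients' consensus, which is the robustness property the response describes.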
Summary: Most existing methods in multimodal clustering face three core challenges concerning trustworthiness, weight learning, and parameter learning. Motivated by the peer-review mechanism, this paper tackles the multimodal clustering problem and realizes mutual review of different modalities by rotating the roles of "author" and "reviewer" to explore their potential relationship. Moreover, a trustworthy score is further obtained in a self-supervised manner. Considering both aspects, a new peer-review trustworthy information bottleneck method is proposed. Comparative experiments on eight multimodal datasets show that the proposed method outperforms existing state-of-the-art methods.

## update after rebuttal
The authors have addressed most of my concerns, and I keep my rating.

Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence, as demonstrated by the comparative experiments.

Methods And Evaluation Criteria: After carefully checking the manuscript, the proposed methods and evaluation criteria make sense for the multi-modal clustering problem. The proposed peer-review trustworthy information bottleneck method is interesting, and comparative experiments on eight multimodal datasets show that the proposed method outperforms existing state-of-the-art methods.

Theoretical Claims: The correctness of Theorem 3.3 and its proof has been reviewed and found to be valid.

Experimental Designs Or Analyses: The experimental design and analyses in Section 4 have been reviewed, and their soundness and validity are confirmed.

Supplementary Material: No supplementary material.

Relation To Broader Scientific Literature: The paper contributes to the broader literature by introducing a peer-review-based multimodal clustering framework, which has not been extensively explored in previous works. The trustworthiness score and information bottleneck approach further enhance its novelty.
Essential References Not Discussed: The introduction sufficiently covers essential related works, but additional discussion on the differences between multimodal and multi-view clustering would provide better context for its contributions. Some information-theory-based methods could also be reviewed, e.g., Dual Contrastive Prediction for Incomplete Multi-View Representation Learning, TPAMI 2023. Other Strengths And Weaknesses: This paper proposes a novel multi-modal clustering method grounded in robust theory and practical applicability. The paper has a clear structure. The proposed method is validated with well-designed experiments. The paper also has some limitations: 1. Lack of discussion on existing trustworthy multimodal clustering approaches. Are there prior weighted multimodal clustering methods that incorporate trustworthiness in weight learning? 2. Unclear details regarding the implementation of baseline comparisons. Where was the code for the compared methods obtained? A fair comparison is necessary to validate the performance improvements. 3. Many cited works focus on multi-view clustering, while this paper targets multimodal clustering. What are the key differences, and how do multimodal clustering methods differ from multi-view clustering approaches? The paper should include recent works on multimodal clustering for a more comprehensive discussion. Other Comments Or Suggestions: NA Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the insightful comments and constructive suggestions. We have carefully revised the whole manuscript and provided detailed responses to each point below. **1. Lack of discussion on existing trustworthy multimodal clustering approaches. Are there prior weighted multimodal clustering methods that incorporate trustworthiness in weight learning?** ***Response:*** Thanks for your comments. Regarding the trustworthiness of multiple modalities, almost all existing methods [1-4] focus on trustworthy multimodal classification. To ensure the reliability of both the multi-modal integration and the final decision, trustworthy multimodal classification methods produce a stable and reasonable uncertainty estimation for each modality and thus promote both classification reliability and robustness. For example, Han et al. [2] introduce the variational Dirichlet to characterize the distribution of the class probabilities, parameterized with evidence from different views and integrated with the Dempster-Shafer theory, thus promoting both classification reliability and robustness. Zheng et al. [3] propose a trustworthy multimodal classification network via multi-level confidence learning, which integrates both feature- and label-level confidence learning for trustworthy multimodal classification. Zou et al. [4] introduce a transparent fusion strategy based on modality confidence estimation to track information variation within different modalities for dynamic fusion. Different from them, the proposed method aims to guarantee the trustworthiness of the learned modal weights in a self-supervised manner. To the best of our knowledge, none of the existing weighted MMCs employ a trustworthiness strategy in the weight learning process. [1] Z. Han, C. Zhang, H. Fu, and J. T. Zhou, Trusted multi-view classification, in Proceedings of the International Conference on Learning Representations, 2021, pp. 1–11.
[2] Han, Z., Zhang, C., Fu, H., and Zhou, J. T. Trusted multi-view classification with dynamic evidential fusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(2), 2551-2566. [3] Zheng, X., Tang, C., Wan, Z., Hu, C., and Zhang, W. Multi-level confidence learning for trustworthy multimodal classification. In Proceedings of the AAAI Conference on Artificial Intelligence, 2023, pp. 11381-11389. [4] Zou, X., Tang, C., Zheng, X., Li, Z., He, X., An, S., and Liu, X. DPNet: Dynamic poly-attention network for trustworthy multi-modal classification. In Proceedings of the 31st ACM International Conference on Multimedia, 2023, pp. 3550-3559. **2. Unclear details regarding implementation of baseline comparisons. Where was the code for the compared methods obtained? A fair comparison is necessary to validate the performance improvements.** ***Response:*** Thanks for your comments. The code for all compared methods was obtained from the GitHub repositories published in the original papers, and their parameter settings strictly follow the implementation details in the original papers to ensure correct reproduction. All the compared methods and the proposed method are run in the same experimental environment: a desktop computer with the Windows 10 operating system, 32GB RAM, and MATLAB 2021a. **3. Many cited works focus on multi-view clustering, while this paper targets multimodal clustering. What are the key differences, and how do multimodal clustering methods differ from multi-view clustering approaches? The paper should include recent works on multimodal clustering for a more comprehensive discussion.** ***Response:*** Generally, the aims of multi-view clustering and multimodal clustering are similar, especially in integrating different sources of information to improve clustering performance.
The differences between them are as follows: multi-view learning focuses on diverse feature representations of the same object, while multi-modal learning deals with complex relationships between heterogeneous modalities, which is more complicated to handle. In practical applications, overlaps between them may exist (e.g., multi-modal data can also be considered as generalized multi-view data). However, technical solutions should be selected based on data characteristics (feature homogeneity and semantic consistency) to ensure methodological compatibility. In this paper, we use the more general term multi-modal clustering instead of multi-view clustering. In the final version, we will include and discuss more recent works on multi-modal clustering, the relationships and differences between the two settings, and some information-theory-based methods, e.g., 'Dual Contrastive Prediction for Incomplete Multi-View Representation Learning'. Thanks again for the valuable suggestions provided by the reviewer. The modifications will be added to the final version.
Summary: This paper proposes a new peer-review trustworthy information bottleneck method. It designs a multimodal peer-review process, in which each modality iteratively acts as an "author" or "reviewer" to conduct peer review and explore the potential relationships, which are quantified as the peer-review score; and the trustworthiness of the modality as a "reviewer" is judged in a self-supervised manner. Extensive experiments on 8 datasets show that the proposed method is superior to existing cutting-edge methods in terms of clustering performance indicators. Claims And Evidence: The claims in the manuscript are supported by clear and convincing evidence from the extensive experiments and the results from different aspects. Methods And Evaluation Criteria: The proposed methods and/or evaluation criteria (including the datasets, compared methods, and performance indicators) make sense for the problem. Theoretical Claims: I have checked the correctness of Theorem 3.3 and its proof in the manuscript and appendix. Experimental Designs Or Analyses: I have checked the soundness/validity of the experimental designs and analyses in Sec 4. Supplementary Material: No supplementary material. Relation To Broader Scientific Literature: This paper proposes a new peer-review trustworthy information bottleneck method, which is the key contribution of this manuscript. Essential References Not Discussed: None Other Strengths And Weaknesses: The authors provide a clear explanation of the proposed method, enhancing the reader's understanding of its importance. The paper presents a clear motivation and is well-structured, which also offers some valuable insights. My comments are as follows: First, generally, in a peer-review process, an EiC is also involved. I am interested in which part stands for the EiC role in the clustering process.
Second, the author should detail the ‘k-means-like draw-and-merger algorithm’ in Sec 3.5 using one or two sentences to enhance clarity and readability. Other Comments Or Suggestions: Refer to the weakness Questions For Authors: Refer to the weakness Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the insightful comments and constructive suggestions. We have carefully revised the whole manuscript and provided detailed responses to each point below. **1. Generally, in a peer-review process, an EiC is also involved. I am interested in which part stands for the EiC role in the clustering process.** ***Response:*** Thanks for your comments. Indeed, an EiC is also involved in an academic peer-review process, and we argue that both the EiC and the AE can play the same supervisory role. This is consistent with our intention to use the final clustering result to ensure the trustworthiness of the peer-review score in a self-supervised fashion. In the submitted manuscript, the AE is mentioned to help readers better understand how to self-supervise and evaluate the trustworthiness of the multimodal peer-review process. In summary, the final clustering result can be regarded as the EiC/AE. **2. The author should detail the ‘k-means-like draw-and-merger algorithm’ in Sec 3.5 using one or two sentences to enhance clarity and readability.** ***Response:*** Thanks for your comments. In the ‘k-means-like draw-and-merger algorithm’, each sample is sequentially drawn from its old cluster and assigned to the optimal new cluster that minimizes the merger cost, thereby maximizing the objective function. In the following, we outline the draw-and-merger algorithm and the optimization process of k-means, demonstrating why the draw-and-merger algorithm works like k-means. The optimization process of $k$-means mainly includes the following steps: (a) Initialize $k$ centroids by randomly selecting $k$ samples. (b) Assign each data point to the optimal new cluster corresponding to the nearest centroid, which reduces the $k$-means loss (i.e., the Sum of Squared Errors, SSE), and recalculate the centroid of each cluster. (c) Loop through step (b) until convergence or the maximum number of iterations is reached.
The draw-and-merger algorithm mainly includes the following steps: (1) Initialize $k$ clusters by randomly assigning samples. (2) Sequentially reassign each data point to the optimal new cluster corresponding to the minimum merger cost. (3) Loop through step (2) until convergence or the maximum number of iterations is reached. Obviously, the steps of both algorithms are similar, and their key step is to reassign each data point to the optimal new cluster. The difference is that the draw-and-merger algorithm reduces the computational complexity by formalizing the merging loss. Thanks again for the valuable suggestions provided by the reviewer. The modifications will be added to the final version.
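The correspondence between the two loops can be sketched in a few lines of Python (our own illustration of the steps above; the `merger_cost` function, data, and iteration cap are hypothetical, not the authors' implementation):

```python
import random

def kmeans_like_draw_and_merge(points, k, merger_cost, iters=10, seed=0):
    """Sketch of the draw-and-merger loop: each sample is drawn from its
    current cluster and reassigned to the cluster with the minimum
    merger cost; `merger_cost(point, cluster)` is a hypothetical stand-in
    for the paper's formalized merging loss."""
    rng = random.Random(seed)
    # (1) initialize k clusters by randomly assigning samples
    labels = [rng.randrange(k) for _ in points]
    for _ in range(iters):
        changed = False
        # (2) sequentially reassign each point to the cheapest cluster
        for i, p in enumerate(points):
            clusters = [[q for j, q in enumerate(points)
                         if labels[j] == c and j != i] for c in range(k)]
            best = min(range(k), key=lambda c: merger_cost(p, clusters[c]))
            if best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:  # (3) loop until convergence
            break
    return labels

# With a squared-distance-to-centroid cost, the loop reduces to k-means.
def sse_cost(p, cluster):
    if not cluster:
        return 0.0
    c = sum(cluster) / len(cluster)
    return (p - c) ** 2

labels = kmeans_like_draw_and_merge([0.0, 0.1, 0.2, 5.0, 5.1], 2, sse_cost)
```

As in step (b) of k-means, the key operation is reassigning each point to its optimal cluster; only the cost function differs.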
Disturbance-based Discretization, Differentiable IDS Channel, and an IDS-Correcting Code for DNA Storage
Reject
Summary: The authors propose THEA-Code, an IDS-correcting code for DNA storage, where the codewords are subject to insertion, deletion, and substitution (IDS) errors. Their approach has two main components: first, they train a differentiable model to simulate the IDS channel. Using the trained channel, they additionally train an auto-encoder with Gumbel-Softmax discretization, which is able to reconstruct the DNA even when the codewords are corrupted. Claims And Evidence: 1. The authors claim "commendable performance" of their approach. This is supported by their experiments showing consistent improvements over two pieces of previous work in Table 5. 2. The authors claim their method works in realistic settings, and this was demonstrated using a simulated channel called MemSim. However, this remains a simulated setting, and I am not sure if a more realistic setting is available. 3. The authors claim Gumbel softmax is much better than vanilla softmax, and provide an ablation study in Figure 3. Here, the authors show Gumbel softmax produces lower entropy, which corresponds to better performance. I am not sure if simply adjusting the temperature of the softmax (which in turn reduces entropy) will achieve the same thing. Methods And Evaluation Criteria: 1. The authors only use NER as the metric, which makes sense in this case. However, I am not sure if there are common and more advanced metrics in this area. 2. The authors experiment with a variety of channels at different code rates, which shows the method's robustness. Theoretical Claims: No. Experimental Designs Or Analyses: I checked the experiments in the main paper. Supplementary Material: No. Relation To Broader Scientific Literature: The authors argue that previous work also uses auto-encoder-based codes (Baldi et al.); however, they do not have the simulated IDS channel, which makes it harder to take advantage of the specific error profile. Essential References Not Discussed: N/A.
Other Strengths And Weaknesses: Strengths 1. The proposed method significantly outperforms previous methods. 2. The authors conduct ablation studies for Gumbel Softmax, as well as testing their approach in various setups. 3. The authors provide detailed analyses in their Appendix. Weaknesses: 1. The intuition for Gumbel Softmax is a bit unclear to me (I might be missing something). The authors explain that they adopted Gumbel Softmax so that it will "constrain the logits x to produce one-hot-like probability vectors" (also shown through their theorem). However, if the goal is simply to produce sharp distributions, this can be achieved by adjusting the softmax temperature. 2. Lacking an ablation study and detailed analysis for the simulated IDS channel. It would be interesting to see how the system performs without it, and how accurate the simulated IDS channel is. (Correct me if I missed these.) 3. The error profile seems limiting. The simulated IDS channel takes an error profile vector, which is a simple statistic over the types of errors encountered. However, it is unclear how a simple vector can summarize more complex errors. Other Comments Or Suggestions: N/A. Questions For Authors: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We sincerely thank the reviewer for their valuable efforts. We will revise the manuscript accordingly. We hope our rebuttal has addressed the concerns.** **Q1**: Is a more realistic setting than MemSim available? **A1**: Firstly, a simulated channel is essential for training such a model, as alternating between training epochs and wet-lab experiments is neither time- nor resource-efficient. As far as we know, MemSim is currently the most advanced option available. It utilizes base-context-related statistics to simulate the biochemical process. In reality, DNA storage channels vary significantly depending on the specific methods, equipment, etc. As a result, users may prefer to train codes tailored to their own channels rather than relying on a universal simulated channel. This is also why we present our work as a generalizable method rather than just a standalone model. In future research, a generative model like a VAE, in combination with the differentiable IDS channel, could be explored for an NN-based simulation. **Q2**: If the goal is simply to produce sharp distributions, will adjusting the temperature of the softmax achieve the same thing? **A2**: This is an insightful thought. Our initial attempt involved adjusting the softmax temperature, but this alone was not the optimal choice. In the training phase, we want to progressively sharpen the encoder’s output distribution while ensuring the decoder remains sensitive only to the maximum entry of the distribution. Applying a low-temperature softmax in training may hinder convergence. If the autoencoder has not yet learned meaningful features, an overly sharp distribution can prevent it from reaching a non-trivial solution. Experiments across different settings showed that while convergence is possible, it requires careful tuning of $t$, making training less robust.
Some results under the default setting are listed below:

| | Gumbel Softmax | Softmax t=0.1 | t=0.2 | t=0.3 | t=0.4 | t=0.6 |
|---:|:---:|:---:|:---:|:---:|:---:|:---:|
| NER | 1.06 | 6.10 | 1.54 | 3.91 | 1.89 | 21.00 |
| Entropy | 2e-5 | 4e-4 | 2e-3 | 2e-4 | 1e-3 | 0.08 |

These results suggest that adjusting the softmax temperature can help generate low-entropy codewords, but the functionality of the codewords is compromised compared to the proposed disturbance constraints. Beyond applying a fixed low-$t$ softmax, we also explored alternative approaches, including random-$t$ softmax, $\sin t$ softmax, and optimizing an entropy constraint on the distributions. Among these, the entropy constraint also worked, but similar to adjusting the softmax $t$, its weight $\lambda$ required careful tuning to balance distribution sharpening and model convergence while preventing the decoder from exploiting soft distributions. **Q3**: It would be interesting to see how the system performs without the simulated IDS channel, and how accurate the simulated IDS channel is. **A3**: We would like to thank the reviewer for this insightful comment. Without the simulated channel, the entire framework would not function, as the neural-network-based channel is essential for back-propagating gradients to the encoder. If we bypass the IDS channel using a straight-through approach, the task degenerates into a trivial copy-and-paste operation with 100% accuracy. This has been confirmed in unreported experiments by setting the channel error rate to 0. We evaluated the accuracy of the simulated IDS channel (DIDS) by comparing it to the ground truth produced by the conventional IDS channel (CIDS). Part of the results are as follows:

| CH Err | 0 | 1% | 5% | 10% | 20% | 30% | 40% |
|---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| CIDS==DIDS | 100 | 99.8 | 99.4 | 99.1 | 94.2 | 66.6 | 41.5 |

We found that the learned channel is reliable for simulating channels with an error rate of less than 20%.
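The distinction discussed in A2 above can be illustrated with a small pure-Python sketch (our own illustration, not the paper's code): a low-temperature softmax sharpens the distribution deterministically, while Gumbel-softmax sharpens a noise-perturbed sample of the logits.

```python
import math
import random

def softmax(logits, t=1.0):
    # Numerically stable tempered softmax.
    m = max(logits)
    exps = [math.exp((x - m) / t) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def gumbel_softmax(logits, t=1.0, rng=None):
    # Perturb each logit with Gumbel(0, 1) noise, then apply the
    # tempered softmax: samples are one-hot-like yet remain
    # differentiable with respect to the logits.
    rng = rng or random.Random(0)
    noisy = [x - math.log(-math.log(rng.random() or 1e-12)) for x in logits]
    return softmax(noisy, t)

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

logits = [2.0, 1.0, 0.5, 0.1]       # hypothetical logits over {A, T, G, C}
plain = softmax(logits)             # soft distribution
sharp = softmax(logits, t=0.1)      # deterministic sharpening, lower entropy
sample = gumbel_softmax(logits, t=0.5)  # stochastic one-hot-like sample
```

Lowering `t` always sharpens around the same argmax, which is why a decoder can still exploit the soft ordering of the entries; the Gumbel noise breaks this determinism during training.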
We will append a section in App C to include detailed results from these experiments. **Q4**: The simulated IDS channel takes an error profile vector, which is a simple statistic over the types of errors encountered. However, it is unclear how a simple vector can summarize more complex errors. **A4**: The error profile records the errors that occur in a sequence. For instance, given ATGGC and an error profile of (Ins C, Ins T, 0, Del, 0, 0, 0), the resulting sequence would be **CT**A~~T~~GGC. This strategy covers all possible error types. We will add an Appendix section on how the profile is defined. In the simulated channel, the profile is DNA-dependent. The differentiable IDS channel faithfully transforms the sequence according to the error profile. Thus, the channel is fully defined by how the profile vector is generated. Context-free channels, such as C111, generate error profiles based on preset probabilities, while MemSim is sequence-dependent, generating profiles based on DNA sequences by sampling from $P(profile|DNA)$. The right column of Line 371 in the manuscript describes this part in detail.
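The profile-application rule described in A4 can be made concrete with a short sketch (our own tuple encoding of profile entries for illustration; the paper's internal representation may differ):

```python
def apply_error_profile(seq, profile):
    """Apply an IDS error profile to a DNA sequence.

    Profile entries (a hypothetical encoding for illustration):
      ("ins", b)    - insert base b without consuming a source symbol
      ("del", None) - skip (delete) the next source symbol
      ("sub", b)    - replace the next source symbol with base b
      ("ok", None)  - copy the next source symbol unchanged
    """
    out, i = [], 0
    for op, base in profile:
        if op == "ins":
            out.append(base)          # consumes no source symbol
        elif op == "del":
            i += 1                    # source symbol dropped
        elif op == "sub":
            out.append(base)          # source symbol replaced
            i += 1
        else:  # "ok"
            out.append(seq[i])
            i += 1
    return "".join(out)

# Rebuttal example: ATGGC with (Ins C, Ins T, 0, Del, 0, 0, 0) -> CTAGGC
profile = [("ins", "C"), ("ins", "T"), ("ok", None), ("del", None),
           ("ok", None), ("ok", None), ("ok", None)]
received = apply_error_profile("ATGGC", profile)  # -> "CTAGGC"
```

Note that the profile length equals the source length plus the number of insertions, since insertions consume no source symbol.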
Summary: They proposed a universal method for designing tailored IDS-correcting codes across varying channel settings. 1. They proposed a disturbance-based discretization to discretize the features of the autoencoder, which applies a Gumbel SoftMax to encode the alphabet {A, T, G, C}. 2. A simulated differentiable IDS channel is developed as a differentiable alternative for IDS operations, which is the key to addressing IDS- or DNA-related problems using deep learning methods. Claims And Evidence: convincing Methods And Evaluation Criteria: make sense Theoretical Claims: Probably correct Experimental Designs Or Analyses: sound Supplementary Material: all Relation To Broader Scientific Literature: DNA storage, ECC Essential References Not Discussed: N/A Other Strengths And Weaknesses: First of all, I'm not an expert in DNA storage, but I applaud the authors' contribution in using the Transformer model in this area. Strengths 1. The article is readable, even for people unfamiliar with the topic. 2. This is a valuable field, and this work is the first to model it using Transformers. Weaknesses 1. This paper lacks a clear benchmark, including dataset settings and error type distribution settings. The authors should explain how the dataset was constructed in the experiments section so that researchers can follow along. 2. Although the field of DNA storage lacks corresponding benchmarks, the authors should consider comparing with similar methods in the field of ECC: [1] Choukroun, Yoni, and Lior Wolf. "Error correction code transformer." Advances in Neural Information Processing Systems 35 (2022): 38695-38705. [2] Wang, Hanrui, et al. "Transformer-QEC: quantum error correction code decoding with transferable transformers." arXiv preprint arXiv:2311.16082 (2023). Other Comments Or Suggestions: 1. I suggest moving Figure 2 to page 5 for easy viewing. In addition, Figure 2 lacks explanation for some components, such as "sink". 2.
The authors should consider showing the structure of the model in detail, even if they provide the code. 3. The modeling approach in the field of ECC is similar; however, can you explain why conventional approaches do not address IDS-correcting codes across varying channel settings? The authors should add context and meaning to this section. 4. The link to the code provided by the authors seems to be cancelled and I can't view it. Questions For Authors: 1. As the proposed model is designed to handle source sequences and codewords of constant lengths, is it possible to process a short one with padding? 2. Line 238: "Particularly, when imposing constraints to enforce greater discreteness in the codeword, the joint training of the encoder and decoder resembles a chicken-and-egg dilemma, where the optimization of each relies on the other during the training phase." is not clear; could you provide more details? 3. I noticed that the dataset is a randomly generated sequence; however, the authors did not specify the exact rules. Is any sequence allowed, and how is an error operation defined? I expect the authors to state the definition of allowed sequences, as well as visualizations, in the earlier sections. 4. When using combinatorial codes for correcting both a single error and a burst of errors, is the proposed method competitive? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We sincerely thank the reviewer for their valuable comments. We hope our rebuttal has adequately addressed the concerns. Minor concerns not mentioned here will also be revised.** **Q1**: The code is empty. **A1**: This appears to be a cache issue with the anonymous hosting platform, as several similar cases have also been reported on their GitHub page. It has now been fixed and is accessible at the same link: https://anonymous.4open.science/r/THEACode . Nonetheless, it’s our fault for not thoroughly verifying the availability of the code. **Q2**: How was the dataset/profile constructed, and how is an error operation defined? **A2**: The DNA sequences are randomly generated with equal probabilities for A, T, G, and C, with no inherent patterns. The error profiles are constructed according to the respective channel settings. For instance, under the default setting, each profile position undergoes an Ins, Del, or Sub with equal probability (Err_Rate/3), and Ins/Sub are further distributed equally among bases. For the simulated realistic channel MemSim, we use its official implementation to generate the output sequence $s’$ from the input $s$, then infer an error profile $p(s,s’)$. We will add an Appendix section on the error profile. Here's a brief example: given the sequence ATGGC and an error profile of (Ins C, Ins T, 0, Del, 0, 0, 0), the resulting sequence would be **CT**A~~T~~GGC. **Q3**: Comparing with similar NN-based ECC. **A3**: We follow the research trend of NN-based ECCs, which focus on linear codes such as LDPC. However, correcting errors in AWGN channels is fundamentally different from handling IDS errors. Ins and Del shift the entire sequence, making them inherently unsuitable for linear codes. Transplanting these methods to IDS correction would likely face the same challenges addressed in this manuscript.
On the other hand, directly applying such approaches to IDS errors would not differ significantly from using conventional linear codes, which has been explored in very early DNA storage research with unsatisfactory results. **Q4**: Why do conventional approaches not address varying channel settings? **A4**: We believe this is due to two main reasons: + In conventional ECC research, handling complex AWGN channels has not been a primary focus, as such complexity is less critical than in DNA storage. As evidence, although NN-based ECCs offer advantages in complex channels, most existing works do not emphasize this capability. + IDS correction follows a different approach from conventional ECCs. Even for the simplest case of correcting a single IDS error, the mathematical foundations remain open, and an optimal code has yet to be established. Designing more advanced IDS codes for complex channels thus remains a challenging open problem. **Q5**: Is it possible to process short codewords with padding? **A5**: Padding is not explicitly described but is actually used throughout the work. This is necessary because synchronization errors (Ins/Del) alter the sequence length. Handling variable-length sequences is an interesting topic. It raises the question of whether the model would learn individual codes for different lengths or a consistent code that accommodates both short and long sequences. This model was trained with fixed-length sequences, as sequence length is a key piece of prior knowledge for correcting synchronization errors. For example, without knowledge of the codeword length, a broken codeword of length n could originate from either a length n-1 codeword with an Ins, a length n codeword with Subs, or a length n+1 codeword with a Del. Multiple errors would further complicate this scenario. We infer that variable-length sequences would significantly increase the task's difficulty. Considering this, it may require extensive research in future work to answer this question.
**Q6**: More details on the chicken-and-egg dilemma. **A6**: The dilemma arises from the interdependence of the encoder and decoder during training. Specifically, when using discreteness constraints, if the encoder converges prematurely to a local minimum due to these constraints, the entire framework fails to function properly. To mitigate this, we introduce the auxiliary task, which serves as a logical "warm-up" for the encoder. This task is simple yet effective, as shown in App D. **Q7**: Competitive with combinatorial codes in correcting a single error? **A7**: The accuracy cannot surpass combinatorial codes in this scenario, as they are mathematically guaranteed to correct a single error. However, our other research efforts found that an NN-based decoder can decode the combinatorial codewords with 100% accuracy. To evaluate whether the end-to-end method is competitive with combinatorial codes in a single-error IDS channel, we conducted experiments with code rates aligned to Cai’s code at 34/50 and 133/150. The reported **NER is 1.6% and 2.1%**, respectively, inferior to the combinatorial code, although THEA-Code performs far better in correcting multiple errors.
Summary: This work proposes THEA-Code, an autoencoder for learning IDS-correcting codes. It does this in two stages: (1) learning a differentiable IDS channel from ATGC sequences produced by the CIDS, and then (2) using the learned IDS channel to train an autoencoder to automatically learn an IDS-correcting code. Claims And Evidence: The claims are supported with experimental results. Methods And Evaluation Criteria: Mostly. One question I had is whether any real genomic datasets are used, or if random AGTC sequences are drawn for the experimental sections. I am not an expert on genomic data, so I do not know if the AGTC sequences may have memory or can be assumed iid. To be more convincing, I think the experiments should include error correction performance benchmarks on real-world genomic data. Theoretical Claims: I think the theoretical result (Theorem 3.1) needs some work. The theorem statement itself is vague and could benefit from a precise mathematical description. This would help the reader understand what is being proved in the proof. The current theorem statement reads more like a remark. Regarding the proof, I do not know if both $\epsilon_1, \epsilon_2$ in the proof are small (as it says that either $y_1$ or $y_2$ is less than $\epsilon$, but the final bound on $\pi_1$ is in terms of both $\epsilon_1$ and $\epsilon_2$), so it is hard to tell whether sparsity is achieved. Perhaps the theorem statement would make sense if it related the sparsity of $\pi$ to the convergence tolerance $\epsilon$. Additionally, there is no mention of a full proof anywhere for general $\tau$ and more than 2 logits. Experimental Designs Or Analyses: The experimental results seem overall sound and valid. One concern I had was that the codeword length is always fixed to 150. Why not fix the model's rate and instead output codewords of length $\ell_s$/(code rate)? This would seem more useful in practice.
Also, it raises the concern that the model is overfitted to a very specific codeword length. Is the model able to generalize to different codeword lengths? I.e., what if I wanted to use a rate of 0.50 but a source sequence of 300? This also raises a question of how the comparison methods (DNA-LM, Cai, and HEDGES) operate. Is the comparison across all methods done using the same exact source sequence(s), resulting in the same codeword length for all methods (at a fixed code rate), and then comparing the error rates? If not, I believe there may be slight unfairness in the comparison. In any case, this should be mentioned. Supplementary Material: Yes; however, the code link's files are empty. I don't find this as important currently as the claims and presentation in the paper. Relation To Broader Scientific Literature: I am not an expert in DNA coding. However, the approach seems novel (using the 3-simplex to represent soft versions of the ACTG symbols, and then learning transformer-based models for both the channel and the error correcting code). The authors have included a fairly extensive literature review for deep learning applied to coding theory, which I believe is the closest area of research to this work. Essential References Not Discussed: To the best of my knowledge, no. Other Strengths And Weaknesses: Aside from what was mentioned above, I think the paper is well-written and easy to follow. Some other recommendations I would have are: - putting some experiments regarding the accuracy of the learned IDS channel in the main text - including a diagram of some of the IDS operations when discussing the differentiable channel in Section 4. Showing how the probability vector representations are handled throughout the DIDS and CIDS with a working example (say, for insertion and deletion) would be very helpful to the reader. My main concern with the paper is Theorem 3.1 (see above).
I also feel it is a bit disconnected from the rest of the paper, as it is never discussed later on in the experiments (i.e., to verify that sparsity is achieved). The core contributions seem to really be in learning the channel, and then using the differentiable channel to learn the code. Other Comments Or Suggestions: I wonder if some sort of "adversarial" training could be done, where the learned channel and error correcting code are adversaries, and they are both learned from scratch. This would alleviate the need to pretrain the IDS channel on IDS channel inputs and outputs. Questions For Authors: Please see the above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **We sincerely thank the reviewer for the insightful feedback, which is invaluable in improving our manuscript. As this is the only negative score, we genuinely hope our rebuttal has addressed all the concerns and that the reviewer may reconsider the score.** **Q1**: The code is empty. **A1**: This appears to be a cache issue with the anonymous hosting platform, as several similar cases have also been reported on their GitHub page. It has now been fixed and is accessible at the same link: https://anonymous.4open.science/r/THEACode . Nonetheless, it’s our fault for not thoroughly verifying the availability of the code. **Q2**: Are any genomic datasets used? Random AGTC sequences? **A2**: We use random AGTC sequences, as DNA molecules serve as a memoryless medium for storing arbitrary information in DNA-based information storage. The use of genome-style or bio-compatible DNA sequences for **in vivo** storage has been explored (see [1]). However, in this **in vitro** storage setting, where DNAs exist in dry powder form, sequence constraints are relaxed. Typically, only patterns that are difficult to synthesize or sequence warrant attention, which is actually a property of the IDS channel (i.e., patterns introducing higher error rates) and should be handled by the channel modeling. When storing genomic information in DNA molecules, genomic knowledge is also unnecessary. The data is usually compressed **in silico** before storage, maximizing entropy and eliminating inherent sequence patterns. [1] An artificial chromosome for data storage, NSR **Q3**: Thm 3.1 needs some work. Proof for general $\tau$ and $n$ logits. **A3**: We will revise Thm 3.1 and its proof for clarity. Specifically, $\epsilon_1$ is similar to a convergence tolerance, while $\epsilon_2$ accounts for the chance that $y_1$, as a sample from a distribution, deviates beyond this tolerance.
Additionally, we will **provide a full proof** in the Appendix for general $\tau$ and $n$ logits, in about 1.5 extra pages. The proof follows the existing proof sketch but includes additional details and a trick involving the mean value theorem for multivariable functions. **Q4**: Codeword length, why not fix source length? Still work at 300/600? **A4**: In DNA storage, the sweet spot for molecule length is around 150 due to biochemical limitations. Shorter lengths require additional indexing resources, while longer lengths are currently neither time- nor cost-effective; excessively long synthesized DNA sequences accumulate high error rates. Therefore, we conducted experiments by fixing the codeword length rather than fixing the source length. As suggested, we ran experiments with both shorter and longer codeword lengths at settings 25/50 and 300/600. Under a 1% error channel, training the code for 25/50 was much easier, while training for length 600 was relatively challenging and resulted in an inferior NER, as shown below:

| | 25/50 | 75/100 | 300/600 |
|-|:-:|:-:|:-:|
| NER | 0.37 | 0.46 | 5.16 |

We acknowledge that the proposed method is not applicable to arbitrarily long sequences, as it relies on a plain Transformer, which is computationally impractical for very long inputs. However, encoding long DNA is not a currently urgent priority. Future work may explore more efficient Transformer variants for this purpose. **Q5**: Fairness of comparison. Source/Codeword length settings? **A5**: Yes, the comparison is not entirely fair, primarily because the compared methods use discrete settings. The compared source/codeword lengths are presented in Tab 7 App A. The most unfair setting is for Cai’s code, which uses smaller codeword lengths to align the code rate. However, in this case, Cai's accuracy is overrated rather than underestimated. This code is reliable for correcting a single error but fails to correct multiple errors.
Shorter codewords reduce the likelihood of encountering multiple errors in a channel with fixed error rates. **Q6**: Accuracy of the learned IDS channel. **A6**: We would like to thank the reviewer for this insightful comment. We evaluated the accuracy of the learned IDS channel (DIDS) by comparing it to the ground truth produced by the conventional IDS channel (CIDS). Part of the results is shown below:

| CH Err | 0% | 1% | 5% | 10% | 20% | 30% | 40% |
|-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| CIDS==DIDS | 100.0 | 99.8 | 99.4 | 99.1 | 94.2 | 66.6 | 41.5 |

This suggests that the learned channel is reliable for simulating channels with error rates below 20%. We will include the detailed results from these experiments. **Q7**: Theorem 3.1 is a bit disconnected from the rest of the paper, as sparsity is not discussed later on in the experiments. **A7**: The entropy of codewords, which directly reflects sparsity or discreteness, was recorded in Sec 6.1 and App B to illustrate the effect of Thm 3.1. The reviewer may have missed these parts, which is our fault due to the small text that resulted from shrinking the figures to meet the page limit. We will revise this in the updated version. **All other suggestions will be taken into account.**
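For readers unfamiliar with the channel model discussed in this exchange, a minimal sketch of a conventional IDS (insertion/deletion/substitution) channel over the {A, C, G, T} alphabet is given below. The function name and error rates are illustrative assumptions, not the paper's implementation:

```python
import random

def ids_channel(seq, p_ins=0.01, p_del=0.01, p_sub=0.01, rng=None):
    """Apply a memoryless insertion/deletion/substitution channel to a
    DNA sequence. Rates are illustrative, not taken from the paper."""
    rng = rng or random.Random(0)
    alphabet = "ACGT"
    out = []
    for base in seq:
        # insertion: random bases may be inserted before the current one
        while rng.random() < p_ins:
            out.append(rng.choice(alphabet))
        r = rng.random()
        if r < p_del:
            continue  # deletion: the current base is dropped
        elif r < p_del + p_sub:
            # substitution: replace with a different base
            out.append(rng.choice([b for b in alphabet if b != base]))
        else:
            out.append(base)  # transmitted unchanged
    return "".join(out)

noisy = ids_channel("ACGTACGTACGT")
```

Because insertions and deletions change the sequence length, the output generally does not align position-by-position with the input, which is what makes IDS-correcting codes harder than substitution-only codes.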
Summary: This paper presents THEA-code, an end-to-end autoencoder-based model for an IDS-correcting code. Extensive experiments demonstrate that THEA-Code can adapt effectively to various IDS channel conditions and outperforms existing IDS-correcting codes on simulated and realistic DNA storage channels. THEA-Code especially reduces error rates for realistic DNA storage channel conditions. Claims And Evidence: Yes; to the best of my knowledge, there are not any problematic claims. Methods And Evaluation Criteria: Yes; nucleobase error rate (NER) was evaluated on C111, C253, and MemSim across multiple prior methods and THEA-code. Theoretical Claims: The theorems presented seemed correct, although they were sometimes hard to follow when variables were not explicitly defined or explained in the preceding text. Experimental Designs Or Analyses: The autoencoder design and adaptation for the task seemed valid. Supplementary Material: I looked most closely at the comparison experiments in part A. Relation To Broader Scientific Literature: The use of a deep learning-based autoencoder mainly distinguishes this paper from previous work. Essential References Not Discussed: I don't think any essential references are missing. However, I think readers unfamiliar with the area might feel there is a slight lack of background on DNA storage, IDS, etc. and what the model task is in the introduction. Other Strengths And Weaknesses: Given that THEA-code appears to be the first end-to-end autoencoder framework for this task, along with the extensive explanations, it seems like there is sufficient originality and Other Comments Or Suggestions: Page 2 has the typo "distrubance" instead of "disturbance." Page 7 has the typo "apperent" instead of "apparent." Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We sincerely thank the reviewer for their valuable efforts. We will revise the manuscript accordingly. We hope our rebuttal has addressed the concerns.** **Q1**: The theorem is sometimes hard to follow. **A1**: We will revise the main text of Theorem 3.1 for clarity. Additionally, we will provide a full proof in the Appendix for general $\tau$ and $n$ logits, in about 1.5 extra pages. The full proof follows the structure of the proof sketch but includes additional details and a trick involving the mean value theorem for multivariable functions. **Q2**: Readers unfamiliar with the area might feel there is a slight lack of background on DNA storage. **A2**: We agree with the reviewer’s concern. Initially, we had such a paragraph introducing DNA storage before the second paragraph of the introduction, but it was removed due to page limits. We will reintroduce this paragraph explaining the DNA-based information storage pipeline to provide better background for readers unfamiliar with the area. **Q3**: Typos such as "distrubance", "apperent", etc. **A3**: We apologize for these typos and will correct them in the revised version. Additionally, we will conduct a thorough review of the manuscript.
Optimal Survey Design for Private Mean Estimation
Accept (poster)
Summary: This paper studies how to estimate a population mean from surveys collected from different groups of people. Differential privacy is required at the level of each group. At a high level, the mechanism randomly samples users from each group, who then send in their responses plus noise. The population mean is then a weighted average of the received responses. There are free parameters of the algorithm, namely the number of users to sample in each group, and depending on the variance estimate, these parameters can be optimized. Specifically, for three common types of noise distributions, choosing the number of users for each group is a convex optimization problem and can be solved with standard convex solvers. For special parameter choices, the paper derives closed-form solutions for this optimization problem, and in general, they show that the optimal solution can be found using exhaustive search in k dimensions (k is the number of groups). They demonstrate experimentally that the optimization procedure can reduce variance by a factor of 2-4, and the search algorithm runs efficiently for up to moderate (<30) values of k. Claims And Evidence: Most of the claims in the paper are adequately supported by theorems and experimental evidence. There is one claim I feel needs more evidence, which is the local differential privacy guarantee. Methods And Evaluation Criteria: The methods and evaluation criteria are adequate. Theoretical Claims: I did not closely check the theoretical claims, but they seem reasonable. Experimental Designs Or Analyses: I did not check the experimental designs, but they seem reasonable. Supplementary Material: I did not check the supplementary material. Relation To Broader Scientific Literature: This paper fits in both the differential privacy and statistics literature. It continues a trend towards adding differential privacy to fundamental statistical methods, including stratified sampling, the setting considered here.
Essential References Not Discussed: I cannot think of essential references not already discussed. Other Strengths And Weaknesses: The algorithms in the paper are simple and easy to implement. This applies to both the estimation scheme, which is based on simple local DP methods, and to the optimization algorithm, where the authors are very clear about how the various convex optimization solvers are being used. A negative about the work is that it requires substantial prior knowledge about the sample, namely the per-group variances. It is somewhat common to make such assumptions in statistics, but it seems a bit more complicated in the privacy setting since it requires knowing something a priori about the private data. Other Comments Or Suggestions: None Questions For Authors: There is currently no quantitative explanation of the local differential privacy guarantee of the algorithm. I believe it should be a factor of $N_i / n_i$ higher than the central DP guarantee. This seems like it could be quite high; are there examples where the sampling probability is not too low (like >0.3 for example)? Do we gain anything statistically by taking a large fraction of samples? When can we expect to have accurate estimates of variances in each group in a private setting? Can we use existing work to privately measure the population variance under DP? Code Of Conduct: Affirmed. Overall Recommendation: 4
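To make the scheme described in the summary concrete, here is a minimal sketch of one reading of it: stratified sampling plus per-response Laplace noise, combined by an $N_i$-weighted average. The function name, unit sensitivity, and noise calibration are illustrative assumptions, not the paper's exact mechanism:

```python
import random

def private_stratified_mean(groups, n, eps, rng=None):
    """Sample n[i] users from group i, add Laplace noise to each
    response (sensitivity assumed 1), and return the N_i-weighted
    average. An illustrative sketch, not the paper's mechanism."""
    rng = rng or random.Random(0)
    N_total = sum(len(g) for g in groups)
    est = 0.0
    for g, ni in zip(groups, n):
        sample = rng.sample(g, ni)  # subsample without replacement
        # difference of two Exp(eps) draws is Laplace(0, 1/eps)
        noisy = [x + rng.expovariate(eps) - rng.expovariate(eps)
                 for x in sample]
        est += (len(g) / N_total) * (sum(noisy) / ni)
    return est

groups = [[0.2, 0.4, 0.6, 0.8, 1.0] * 20, [0.1, 0.9] * 30]
est = private_stratified_mean(groups, n=[10, 5], eps=1.0)
```

The free parameters are exactly the per-group sample sizes `n`, which the paper optimizes to minimize the variance of `est`.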
Rebuttal 1: Rebuttal: Thank you for recognizing that our work contributes to the growing trend of incorporating differential privacy into fundamental statistical methodologies, particularly in stratified sampling. - Regarding the need for prior knowledge of variances, we agree that this is a limitation. However, in practice, statisticians can often estimate variances using historical data or prior knowledge. Alternatively, a portion of the privacy budget could be allocated for variance estimation. We will conduct a sensitivity analysis through simulations to assess the impact of mild variance misspecifications, ranging from 10-20%. - **In Proposition 3.4**, the nominal privacy budget $M_i$ is given by $$M_i = \log \left( \frac{\exp(\epsilon/\Delta f) - 1 + q_i}{q_i} \right),$$ i.e., the guarantee is $M_i$-DP, where $q_i = \frac{n_i}{N_i}$. We sincerely apologize for the omission of "-DP" in our submission, which may have led to confusion. We have corrected this in our revised version. Indeed, when the subsampling rate $\frac{n_i}{N_i}$ is relatively small, the nominal budget can be quite large. Nevertheless, our primary goal is to ensure central DP while providing some additional protection against the data curator through the (weaker) local-DP guarantee. There could certainly be settings where the sample size is a moderate proportion of the population size, and having a larger sample would improve the statistical estimation by reducing the variance. - Accurate variance estimation within each group can be feasible given historical data or prior knowledge. Alternatively, a portion of the privacy budget could be allocated to estimate population variances during an initial phase.
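As a sanity check on the relation discussed above, the nominal local budget $M_i$ is the inverse of the standard privacy-amplification-by-subsampling bound $\epsilon = \log(1 + q(e^{M} - 1))$. A minimal sketch (the function names are ours, not the paper's):

```python
import math

def nominal_local_budget(eps, q, sensitivity=1.0):
    """Local budget M implied by a central eps-DP guarantee under
    subsampling with rate q, as in the formula above (sketch)."""
    return math.log((math.exp(eps / sensitivity) - 1 + q) / q)

def amplified_central_budget(M, q):
    """Standard amplification by subsampling: an M-DP mechanism
    subsampled at rate q satisfies log(1 + q*(e^M - 1))-DP."""
    return math.log(1 + q * (math.exp(M) - 1))

M = nominal_local_budget(eps=1.0, q=0.1)
# round-tripping through the amplification bound recovers eps = 1.0
```

Note that as $q \to 1$ (no subsampling), $M_i \to \epsilon/\Delta f$, and as $q \to 0$, $M_i$ grows without bound, matching the rebuttal's point that small subsampling rates yield a large nominal local budget.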
Summary: The authors propose a DP stratified sampling scheme that can be optimized for various objectives such as population mean estimation or an A-optimal design. The main contributions are a general algorithm to solve the mixed-integer programming problem this creates, as well as closed-form solutions for important settings. Claims And Evidence: All theoretical results are accompanied by proofs in the appendix. Methods And Evaluation Criteria: The methods and evaluation criteria make sense for the problem. Theoretical Claims: I looked through the proofs in Appendix A, but did not examine them in great detail. I did not notice any issues. Experimental Designs Or Analyses: I examined each of the experiments the authors present and did not notice any major issues. However, there are a few features of the figures that the authors might further analyze. For example, we see a strong effect for the Laplace Mechanism in Figure 2 where the variance ratio increases monotonically for epsilon < 1 and then decreases monotonically for epsilon > 1. A similar effect is observed in Figure 6. What is happening here? Supplementary Material: I looked through the proofs in Appendix A and examined the additional figure in Appendix B. Relation To Broader Scientific Literature: The key contribution of the paper is optimal survey design for private estimation, which (as far as I am aware) has not been previously explored in the DP literature. Essential References Not Discussed: I am not aware of any essential references not discussed. Other Strengths And Weaknesses: The authors identified a gap in the DP literature, successfully derived a solution, and provided a thoughtful experimental evaluation. Other Comments Or Suggestions: 1. The figures are difficult to read in their current form; I ask the authors to please update the text size in the figures to match the remainder of the document. 2. 
I had trouble understanding Table 1 in Section 1 given what had been introduced so far at that point in the work. It wasn't until after reading to Section 5 that I was able to go back and understand the point the authors were making. I would suggest the authors either add more context initially or move Table 1 to later in the document. Questions For Authors: I have no additional questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are thankful that you acknowledged that we identified a gap in the DP literature regarding survey sampling and provided a successful solution along with a thoughtful experimental evaluation. We hope that our responses below will address your comments. - **(Experimental Design Comment)** Thank you for the question. Please recall that the DP variance objective consists of two components: data variance and DP randomness. When analyzing the variance objective under the Laplace mechanism, we found that both components are strongly convex, though proving the strong convexity of the DP-induced variance required significantly more effort. Initially, we assumed that the Discrete Laplace and TuLap mechanisms would exhibit similar behavior due to their comparable shapes. However, we later discovered that for these two mechanisms, strong convexity arises solely from the data variance, while the variance due to the Discrete Laplace mechanism itself is merely convex (specifically, linear). This key difference explains why the Laplace case differs from the others. Intuitively, when both sources of randomness exhibit strong convexity, they each lead to their own optimal designs (Neyman allocation for data variance and proportional allocation for purely Laplace variance). The competition between these two effects determines the optimal design. However, this dynamic does not hold for the Discrete Laplace and TuLap mechanisms, as their variance objectives are not strongly convex by themselves. We will add these insights to the simulation section. - We will update the text size in the figures to match the font size in the main content to improve readability. - To clarify, the table compares the non-private solution (optimal design) evaluated with the DP variance against the DP-optimal solution evaluated with the DP variance, and the values are the ratio of these variances (higher is worse). 
We will provide additional context for Table 1 in the introduction section so that it can be understood by the reader.
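To illustrate the competition between the two optimal designs mentioned in the rebuttal above, here is a toy sketch contrasting proportional allocation with Neyman allocation; the stratum sizes and standard deviations are made up for illustration:

```python
def proportional_allocation(N, total_n):
    """n_i proportional to stratum size N_i (optimal for the pure
    Laplace-noise variance component, per the discussion above)."""
    total_N = sum(N)
    return [total_n * Ni / total_N for Ni in N]

def neyman_allocation(N, sigma, total_n):
    """n_i proportional to N_i * sigma_i (optimal for the data-variance
    component); continuous relaxation, ignoring the n_i <= N_i cap."""
    weights = [Ni * si for Ni, si in zip(N, sigma)]
    total_w = sum(weights)
    return [total_n * w / total_w for w in weights]

N = [1000, 500, 250]     # illustrative stratum sizes
sigma = [1.0, 4.0, 2.0]  # illustrative per-stratum std. devs.
prop = proportional_allocation(N, 100)
ney = neyman_allocation(N, sigma, 100)
# Neyman shifts samples toward the high-variance middle stratum
```

When both variance components are strongly convex (the Laplace case), the DP-optimal design interpolates between these two allocations, which is consistent with the non-monotone behavior observed in Figure 2.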
Summary: This submission is about designing stratified sampling schemes for surveys conducted with differential privacy. In stratified sampling, groups may be surveyed at different rates and these per-group estimates are then combined. This is a ubiquitous survey method. The survey setting is an important one for statistical privacy, but there is not much work on it. The starting point of this paper is that it may be better to design the survey with privacy in mind, rather than independently combining "the best nonprivate survey" with "the best privacy mechanism." Notably, the choice of stratification interacts with the privacy guarantees from subsampling. For a class of surveys and privacy mechanisms, this paper shows how the task of finding the lowest-variance private survey design can be solved optimally and efficiently. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I did not check proofs carefully; the claims seem to make sense. Experimental Designs Or Analyses: The simulations seem appropriate. Supplementary Material: No. Relation To Broader Scientific Literature: The submission grounds itself well in the existing literature and clearly identifies the gap it fills. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: The submission is well-written and well-constructed. I found it very easy to follow. The analysis here is limited: we only consider a few noise distributions and a certain family of variance objectives. The approach assumes exact access to quantities which will not usually be known (e.g., the true per-stratum variances). This is not a fully-formed solution, but I regard it as a large step toward it. Other Comments Or Suggestions: none. Questions For Authors: none. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for recognizing the contributions of our work. While we acknowledge that this research does not fully resolve all related problems, we are pleased that you see it as a significant step forward. Please let us know if you have any further questions. --- Rebuttal Comment 1.1: Comment: I have no questions at this time.
Summary: This paper develops a stratified sampling scheme that minimizes variance while ensuring differential privacy (DP) under the Laplace, Discrete Laplace, and Truncated-Uniform-Laplace mechanisms. The key insight is that stratified sampling can amplify privacy guarantees, but optimal allocation of samples across strata must account for the effect of privacy noise. The authors formulate the problem as an optimization task, determining the optimal subsampling sizes to minimize variance while maintaining a fixed total sample size. They prove the strong convexity of the variance objective, derive closed-form continuous solutions for specific DP mechanisms, and propose an efficient algorithm for finding the optimal integer solution. The results demonstrate that ignoring DP effects can lead to significant variance inflation, and their method offers a principled way to balance privacy and accuracy in survey design. Claims And Evidence: See questions Methods And Evaluation Criteria: See questions Theoretical Claims: See questions Experimental Designs Or Analyses: See questions Supplementary Material: See questions Relation To Broader Scientific Literature: NA Essential References Not Discussed: No Other Strengths And Weaknesses: - Since the analysis is limited to three specific DP mechanisms, could the authors discuss the extension to consider alternative frameworks such as Gaussian DP or Rényi DP? Would it benefit the utility or computation results? - For equation (1), do we need the constraint $n_i \leq N_i$? That is, do we allow sub-sampling with replacement? - This framework seems to assume that $\sigma_i$, the population variances, are known. Though in the discussion, the authors claimed that a pilot study is commonly conducted in practice, would the small portion of pilot samples further introduce a large variance in the objective function? How is it going to affect the accuracy of the solution?
Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your questions. We are happy to address them below: - The key aspect of our setup that enables an efficient solution is the strong convexity property. When generalizing to other settings, it is important to verify that the resulting objective is still strongly convex, which may need to be done in a case-by-case manner; without strong convexity, our efficient algorithm is not applicable. Furthermore, we use the subsampling result of $\epsilon$-DP, which also exists in Rényi-DP and $f$-DP, but not in Gaussian-DP or zero-concentrated DP. Regarding utility, the use of Gaussian noise could improve the utility of the final estimator as it has lighter tails than Laplace noise; however, the variances of both scale in the same manner as the privacy parameter is varied. Ultimately, these extensions are promising directions for future work, and we will include a paragraph in the discussion that includes these points. - You are correct that we do impose the constraint $n_i \leq N_i$, and we only consider subsampling without replacement. A possible extension of this work could also consider subsampling with replacement: for results on privacy amplification through subsampling with replacement, we refer to *"Privacy Amplification by Subsampling: Tight Analyses via Couplings and Divergences"* by B. Balle et al. As noted earlier, the key aspect to investigate in this extension is whether their variance objective is strongly convex, but this is left for future work. This extension will also be mentioned in the discussion section of the paper. - Yes, we agree and recognize this limitation of known variances. In practice, and as is widely done in survey sampling, the variances will be replaced by estimated variances or known bounds on the unknown variances.
This is not ideal, but as other reviewers have also noted, this work begins the treatment of what appears to be a fundamental problem, and so assuming that variances are known seems like a reasonable step. Future work should address the issue of estimating variances, and especially the effect of any heavy tails that may result. Regarding accuracy, even if the variances are misspecified, the mean estimation remains unbiased, though the design may no longer be optimal. We will conduct a sensitivity analysis through simulations to compare the design performance under mild variance misspecifications of approximately 10-20%. Please let us know if our responses have addressed your questions or if you have any remaining concerns about the paper. We would appreciate any further clarification regarding your reasoning for the inclination towards rejection. We are particularly keen to understand your perspective, as other reviewers have acknowledged the novelty and potential impact of our work. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal and clarifications. I have updated the overall rating to 3. However, I still have a question regarding the constraint related to "subsampling without replacement." From my understanding, the constraint is currently defined as $\sum n_i = \eta, \ n_i \in \mathbb{N}$ in all relevant parts of the paper, including Equation (1) (Lines 329 left - 165 right), the definition of $D$ (Line 324), and Equation (9). However, it seems that the condition for "subsampling without replacement" is not explicitly included in these formulations. Specifically, from my understanding, the condition $n_i \leq N_i$ is a necessary condition for "subsampling without replacement." As such, the feasible set should be a subset of $\{n_i \in \mathbb{N} \ | \ \sum n_i = \eta, \ n_i \leq N_i\}$. Could the authors clarify how the "without replacement" condition is formulated in your setting? 
--- Reply to Comment 1.1.1: Comment: Thank you for raising your rating and for your question about including the constraint $n_i \leq N_i$. This was indeed a typo, and we have corrected it in the revised version. To be clear, all our calculations did include this constraint, even though our exposition omitted it in error. As our work builds on the subsampling results from *Gaussian Differential Privacy* (Dong et al., 2022) and Ullman’s notes (2017), where $q_i = \frac{n_i}{N_i} \leq 1$, all our results are derived under the constraints of this feasible set. Please let us know if we have addressed all of your comments, or if there are any other questions or concerns that we can address.
MAS-GPT: Training LLMs to Build LLM-based Multi-Agent Systems
Accept (poster)
Summary: This paper introduces MAS-GPT, a novel multi-agent system generation framework. Specifically, MAS-GPT employs a multi-agent generator model trained using modules such as MAS filtering, inter-consistency assurance, and intra-consistency enhancement. Designed to adapt to the nuances of diverse tasks, MAS-GPT aims to generate optimally performing MAS for tasks of the same category, thereby enhancing the capability of LLM-MAS to address a broad spectrum of problem types. The effectiveness of MAS-GPT is rigorously validated across a wide array of benchmarks, and the paper further presents comprehensive experimental results that substantiate its capabilities. Claims And Evidence: The claims in this paper are largely consistent with the experimental results and avoid overclaiming. Methods And Evaluation Criteria: The method proposed in this paper, or more precisely, the overall pipeline for training MAS generator, effectively addresses the relevant problems and offers significant reference value. Theoretical Claims: n/a Experimental Designs Or Analyses: The experimental design in this paper is largely sound and effectively demonstrates the capabilities of MAS-GPT. However, several points require further clarification and consideration: Lack of Single Agent Framework Specification: The paper does not specify the framework employed for the single-agent baseline models. Cost Comparisons and Metric Justification: I have reservations regarding the cost comparison metric used in this experiment. To the best of my knowledge, standard practice in evaluating LLM cost involves assessing token cost and time cost to quantify both financial and temporal overheads. Therefore, I recommend that the authors augment this section by providing token cost and time cost data for MAS-GPT and all baselines across the benchmarks used. 
Furthermore, the methodology for performance measurement in this cost comparison section, as well as the source of the experimental data, are not clearly delineated. Please provide this essential supplementary information. Scaling Effects of Data Size and Supervised Fine-tuning: The "Scaling effects of data size" analysis appears inconsistent with established supervised fine-tuning methodologies. It is widely understood that fine-tuning a 32B parameter LLM with only 100 data samples is generally insufficient to achieve meaningful adaptation. Consequently, the experimental results presented in Figure 5(a) may not adequately support the conclusions drawn in this section. Supplementary Material: Yes. Figure 7 about MAS pool. Relation To Broader Scientific Literature: This paper is insightful for building LLM-based multi-agent systems. Essential References Not Discussed: No. Other Strengths And Weaknesses: See Experimental Designs Or Analyses. Other Comments Or Suggestions: No. Questions For Authors: See Experimental Designs Or Analyses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your time devoted to reviewing our paper. We are glad to see that you acknowledge that our method is novel, our experimental designs are sound, and our paper is insightful. It is encouraging. We would like to address your remaining concerns in the following. --- &nbsp; **Experimental Designs Or Analyses 1:** Lack of Single Agent Framework Specification: The paper does not specify the framework employed for the single-agent baseline models. **Answer:** Sorry for the missing details. We employ the vLLM library to deploy LLMs. For all baseline methods, we run their official open-source code. --- &nbsp; **Experimental Designs Or Analyses 2:** Cost Comparisons and Metric Justification. I have reservations regarding the cost comparison metric used in this experiment. To the best of my knowledge, standard practice in evaluating LLM cost involves assessing token cost and time cost to quantify both financial and temporal overheads. Therefore, I recommend that the authors augment this section by providing token cost and time cost data for MAS-GPT and all baselines across the benchmarks used. Furthermore, the methodology for performance measurement in this cost comparison section, as well as the source of the experimental data, are not clearly delineated. Please provide this essential supplementary information. **Answer:** Thanks for the recommendation! Following your advice, we have reported the token consumption in addition to the number of LLM calls in the following table. In our initial experiments, we did not record token consumption, so we needed to conduct additional experiments during the rebuttal. Due to limited time, we report the following results. We will report more in our revision. From the table, we see that our MAS-GPT achieves the best performance with the least inference cost (both the number of calls and token consumption). The performance measurement uses accuracy as the metric, averaged over all benchmarks in Table 2.
And the specific numbers are taken from Table 2. We will include all of these in our revision.

| | AgentVerse | DyLAN | MAS-GPT |
| ------------ | ------------- | ------------- | ----------------------------- |
| LLM Calls | 12.05 (70B) | 12.96 (70B) | 1 (32B) + 6.44 (70B) |
| Tokens | 8610.07 (70B) | 4874.22 (70B) | 1133.18 (32B) + 2126.98 (70B) |
| Accuracy (%) | 59.36 | 60.54 | 64.47 |

--- &nbsp; **Experimental Designs Or Analyses 3:** Scaling Effects of Data Size and Supervised Fine-tuning: The "Scaling effects of data size" analysis appears inconsistent with established supervised fine-tuning methodologies. It is widely understood that fine-tuning a 32B parameter LLM with only 100 data samples is generally insufficient to achieve meaningful adaptation. Consequently, the experimental results presented in Figure 5(a) may not adequately support the conclusions drawn in this section. **Answer:** Thanks for this valuable comment. We would like to answer from two perspectives. (1) The base model here is a 32B-Instruct model, which already has some coding capability. Therefore, training 32B-Instruct models to learn to generate MAS code may differ from training 32B-pretrained models to learn instruction-following capabilities. (2) Although Figure 5 (a) shows that using 100 training samples can significantly reduce the execution error rate, it does NOT indicate that using 100 samples is sufficient. Please note that in Figure 5 (b), using 100 training samples results in significantly lower performance compared to using 10000 samples. That is, using 100 samples may successfully teach the LLM to generate executable MAS code, but fails to teach it to generate appropriate MAS, which is essential in ensuring good performance. We truly believe in this new paradigm.
With more diverse and high-performance MAS being included, we believe that MAS-GPT will be further advanced in a way similar to the advancement of ChatGPT: with better data and training techniques, the models become better.

---

Overall, we hope that our responses fully address your concerns and will be grateful for any feedback.
Summary: This paper proposes to train an LLM to build multi-agent systems. The paper reframes MAS construction as a python coding task and represents the MAS as executable python code. One key contribution is a consistency-oriented data construction pipeline that generates high-quality query-MAS pairs. Extensive experiments on different benchmarks and backbones indicate that MAS-GPT outperforms baselines in effectiveness, efficiency, and generalization capability. MAS-GPT also excels in terms of inference costs.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: N/A. No theoretical claims involved.

Experimental Designs Or Analyses: Yes.

Supplementary Material: No supplementary provided.

Relation To Broader Scientific Literature: The paper builds on and extends research in automatic and adaptive construction of agentic systems. In contrast to previous works that require manual design or multi-round refinement, this paper proposes training an LLM to generate a MAS in one inference, thereby reducing cost and improving efficiency. This direction is aligned with a broader trend toward developing more streamlined, scalable, and adaptive AI architectures that simplify complex system design through end-to-end learning.

Essential References Not Discussed: Works that involve optimizing MAS architecture: [1] Cut the Crap: An Economical Communication Pipeline for LLM-based Multi-Agent Systems. ICLR 2024

Other Strengths And Weaknesses:

Strengths:

1. The paper presents a novel idea by training an LLM to build multi-agent systems, thereby reducing both inference cost and the manual effort involved in system design. The motivation is clearly articulated, positioning the work as a natural evolution in the use of multi-agent orchestration.
2. The authors provide a comprehensive description of their consistency-oriented data construction pipeline. The detailed rationale behind the strategies enhances the credibility of the proposed pipeline.
3. The presentation quality is good. The paper writeup is clear and easy to follow.

Weaknesses:

1. Representing a multi-agent system as python code neglects important aspects of agent functionality, such as tool integration and multi-turn interactions. In practical agent systems, agents interact with their environments by executing tools and processing iterative feedback. However, within the scope of this research, the agents seem to prompt the LLM in a single-turn manner. Therefore, the approach somewhat resembles automated workflow orchestration or bootstrapped reasoning. This oversimplification is my main concern with this paper.
2. The paper should add discussion of, and compare performance and cost with, AgentPrune [1], a pipeline for multi-agent communication.
3. The method shares similarities with ADAS in its representation of MAS as code. The paper would benefit from a direct performance and cost comparison with ADAS, similar to the comparison provided with AFlow.
4. The performance gain seems modest. In Table 2, when compared to a single LLM, MAS-GPT shows modest performance gains on most benchmarks except MATH and GSM-H. In Table 3, a similar observation holds for the Qwen2.5 model. This raises questions about the practical benefits of integrating multi-agent structures, especially given the additional complexity involved.

[1] Cut the Crap: An Economical Communication Pipeline for LLM-based Multi-Agent Systems. ICLR 2024

Other Comments Or Suggestions: N/A

Questions For Authors:

1. For the generated MAS, have you observed any interesting workflows beyond the typical sequential prompting and answer ensembling? For example, are there cases where the MAS incorporates if-else structures, cross-references answers from different agents, or employs other non-linear strategies in its workflow?
2. In Figure 7(l), you include a "code test agent." Could you explain how this agent works in practice? Does it verify the generated code by actually executing it with test cases, or does it simply deduce whether the output aligns with the expected test case in natural language?
3. The data curation process involves multiple selection and refinement steps to construct the final query-MAS pair dataset. Could you provide details on the number of query-MAS pairs retained after each selection step (e.g., after initial pairing, after inter-consistency selection, and following intra-consistency refinement)?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We are glad that you acknowledge that our idea is novel, our method is comprehensive, and our direction is aligned with a broader trend toward developing more streamlined, scalable, and adaptive architectures. Let's address your remaining concerns!

---

**W1:** Representing MAS as python code neglects important aspects of agent functionality...?

**A:** Sorry for the missing details! Our MAS-GPT actually **supports** the functionalities you mention (tool and multi-turn interactions), and they are indeed compatible with code. (1) The code execution tool is implemented as a python function `execute_code(code_text)`, which outputs the execution results to other agents. More tools can be implemented similarly. (2) While a single-turn LLM call is represented by a function `call_llm(prompt)`, multi-turn interaction is represented by `multi_turn(history, prompt)`. Representing MAS as python code is a promising direction, as most functionalities in our AI community can be represented by python code. Along this new direction, we believe that with more data, tools, and MAS, MAS-GPT could be continually improved, much like the progression from GPT-3.5 to GPT-4.

---

**W2:** Discussion on AgentPrune.

**A:** Thanks for the recommendation! We will include this discussion and cite the paper. (Methodology) The goals of our paper and AgentPrune are significantly different. We aim to optimize MAS-GPT **once** on diverse domains so that it can **generalize** to diverse domains, whereas AgentPrune optimizes on one validation dataset so that it works on the corresponding test dataset. There are also two recent papers with goals similar to AgentPrune: GPTSwarm (ICML 2024 oral) and AFlow (ICLR 2025 oral). While these three require **re-optimizing** for each test set, MAS-GPT can generalize to diverse test sets **without re-optimizing**.
Since in practice, users will not provide several examples in advance for optimization, we believe that MAS-GPT is a promising direction to truly make MAS practically applicable. (Experiments) We run the official code of AgentPrune on HumanEval and MMLU. From the table, we see that our method performs **much better with less inference cost.**

|Benchmark|Method|Acc|Re-optimizing cost|Test cost (avg)|
|-|-|-|-|-|
|HEval|AgentPrune|70.12|349(70B)|9.8(70B)|
|HEval|MAS-GPT|80.25|0|1(32B)+8.7(70B)|
|MMLU|AgentPrune|75.05|280|7.3(70B)|
|MMLU|MAS-GPT|78.38|0|1(32B)+4.7(70B)|

---

**W3:** Comparison with ADAS.

**A:** Thanks for the advice. (1) The reason why we compare with AFlow rather than ADAS is that AFlow is an improved follow-up of ADAS. (2) We compare MAS-GPT with ADAS (optimized on GPQA). MAS-GPT performs **significantly better than ADAS on these diverse datasets at a much lower cost**, indicating that MAS-GPT is much more generalizable.

||MATH|GSM8K|GSM-H|H-Eval|H-Eval+|MMLU|GPQA|SciBench|Avg|LLM calls|
|-|-|-|-|-|-|-|-|-|-|-|
|ADAS|34.7|36.3|12.4|75.9|69.6|76.6|38.1|14.5|44.8|21~41 (70B)|
|MAS-GPT|68.7|93.4|62.4|80.3|78.9|78.4|37.6|24.2|65.5|1 (32B) + 6.4 (70B)|

---

**W4:** The performance gain seems modest?

**A:** Thanks for the comments. We would like to respond from the following perspectives. (1) MAS-GPT is the **only method** that consistently achieves better performance than SINGLE. Here, we report the percentage of benchmarks on which a method outperforms a single agent.

||CoT|SC|AgentVerse|DyLAN|MAS-GPT|
|-|-|-|-|-|-|
|Percentage|50|62.5|37.5|50|100|

(2) On challenging datasets such as AIME-2024 and GSM-H, our method achieves significantly better performance than SINGLE (16.67 on AIME-2024), demonstrating substantial potential. (3) We agree that for some benchmarks, improvements are not significant. Please note that these patterns commonly exist in all methods. The reasons why we still report them are twofold.
First, these benchmarks are commonly used in the MAS literature and we follow their setups. Second, we want to show that our method is generalizable across diverse datasets.

---

**Q1:** Are there cases where the MAS incorporates if-else structures...?

**A:** Yes, we have observed interesting workflows. For example, we see MAS with if-else structures in several cases. (1) Agents evaluate the current status and determine whether to stop or continue solving. (2) If the code executor shows that the code runs correctly, the system ends; otherwise, the system continues.

---

**Q2:** "code test agent"?

**A:** Yes. We verify the generated code by actually executing it. We implement two related python functions: one takes a code piece as input, and one takes a code piece together with test cases as inputs.

---

**Q3:** Number of query-MAS pairs?

**A:** Thanks for the advice. The selection process filters out those queries that all MAS fail to answer correctly. The refinement process essentially replaces those original query-MAS pairs (Line 256-258).

|Step|Init|Select|Refine|
|-|-|-|-|
|Num|12292|11442|11442|

---

Overall, we hope that our responses can fully address your concerns and will be grateful for any feedback.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response, but I still have significant reservations regarding several aspects:

1. Regarding Weakness 1. My point is that the examples provided in the current paper do not fully capture the complexity inherent in realistic multi-agent systems. I acknowledge your clarification that the representation of MAS as Python code can theoretically support tools and multi-turn interactions. However, the current implementation, as demonstrated in the appendix, only uses the function `self.llm.call_llm(instruction)`, which indicates a single-turn LLM interaction. This does not incorporate the `multi_turn` function or other advanced capabilities mentioned in your response.
Therefore, my original concern remains: the MAS instances generated and trained by MAS-GPT appear overly simplistic, essentially serving as ensembles or variants of multi-agent debate, rather than sophisticated, interactive multi-agent systems.

2. Regarding Performance. I acknowledge your argument that MAS-GPT demonstrates consistent performance improvements across benchmarks. However, it's important to note that the AIME dataset contains only 30 tasks. Therefore, a seemingly substantial improvement of 16% means only 5 additional correct answers. This absolute gain limits the practical significance of the improvement, especially given the complexity introduced by the multi-agent structure.

3. I am not fully convinced by your explanation regarding the "code test agent". How exactly does your current implementation incorporate an agent with code execution capabilities if your agent is called by `self.llm.call_llm()`? Additional details on how these specialized agents are integrated into your code are needed.

4. Regarding your `utils` file. I also have questions about the `utils` module imported in your code examples. It is not clear what auxiliary tools and functions are included in `utils`. Additionally, since the generated code consistently begins with `from utils import *`, can you clarify how MAS-GPT learns about the complete list of utility functions? How do you ensure that MAS-GPT avoids generating or referencing utility functions that do not exist?

---

Reply to Comment 1.1.1: Comment: Thanks for the reply. We noticed that most of your reservations are caused by insufficient implementation details. Please allow us to provide all the details you are interested in.

---

**Q1:** The current implementation, as demonstrated in the appendix, only uses the function self.llm.call_llm(instruction)?
**A1:** We kindly remind the reviewer that the current implementation **already supports** the tools mentioned in our response to **W1** (i.e., execute_code, call_llm, multi_turn). Please refer to the evidence in **Line 841-846** in the appendix. In this example, the agent first generates the code by calling `generate_and_extract_code` and then executes the code by calling `execute_code`. These two functions are implemented in the `from utils import *` module (details provided in our response to *Q3&Q4*). We are sorry that we did not emphasize this detail and will make it clear in our revision. We will open source all code, data, and models.

---

**Q2:** Regarding Performance.

**A2:** (1) We agree that AIME has a limited number of samples. During the rebuttal, we experimented on the GAIA dataset, where MAS-GPT outperforms the single agent by a significant margin. (2) Meanwhile, we believe it is crucial to view this from a comparative perspective. We conduct a thorough comparison across many baselines (10 + 2 in rebuttal), all of which were implemented using official code. Please note that GPTSwarm (ICML 2024 oral) compares with 4 baselines while AFlow compares with 7 baselines. From this comparative standpoint, our method outperforms existing approaches, demonstrating its effectiveness. While we could have chosen to highlight only the benchmarks with larger improvements, we intentionally included a broader range of results to provide the community with a more comprehensive view.

(Qwen2.5-72B-Instruct, samples without additional files)

||L1|L2|
|-|-|-|
|Single|16.67|9.23|
|MAS-GPT|23.81|21.54|

---

**Q3:** "Code agent" implementation?

**A3:** Sorry that our verbal descriptions did not give you a clear understanding. Please allow us to show you the implementation directly.

```
def execute_code(code):
    if not code:
        return "Empty code. No output."
    temp_dir = tempfile.mkdtemp()
    output_dict = {"output": None, "stdout": None, "error": None}

    def run_code():
        try:
            global_vars = {}
            local_vars = {}
            # Write the code to a temporary file
            with open(os.path.join(temp_dir, "script.py"), "w", encoding="utf-8") as f:
                f.write(code)
            # Capture standard output
            stdout_capture = io.StringIO()
            with contextlib.redirect_stdout(stdout_capture):
                exec(code, global_vars, local_vars)  # Execute code
            output_dict["stdout"] = stdout_capture.getvalue().strip()  # Capture print() output
            output_dict["output"] = local_vars.get("output", "None")  # 'output' variable
        except Exception:
            output_dict["error"] = traceback.format_exc()

    run_code()
    if output_dict["error"]:
        return f"Error:\n{output_dict['error']}"
    return f"Final output: {output_dict['output']}\nPrint during execution:\n{output_dict['stdout']}"
```

---

**Q4:** utils file.

**A4:** Sorry for the confusion. In total, we implement the following class and functions in `utils`, which are described in MAS-GPT's system prompt:

```
- `LLM(model_list)`: a class that represents an LLM with the given model list, with two available functions: call_llm(self, prompt) and multi_turn(self, history, prompt).
- `execute_code(code)`: a function that executes the given code and returns the output.
- `test_code_get_feedback(code, test_cases)`: a function that tests the given code with the test cases and returns the feedback.
- `get_function_signature(llm, taskInfo)`: a function that returns the generated function signature for the given task.
- `get_test_cases(llm, taskInfo, function_signature)`: a function that returns the generated test cases for the given task and function signature.
- `extract_code_solution(solution)`: a function that returns the code by extracting (wrapped within <Code Solution> and </Code Solution>) from the given solution.
- `generate_and_extract_code(llm, prompt, temperature=None)`: a function that returns the generated response and the extracted code from the response.
```

Due to limited space, please refer to the example implementation of `execute_code` in **A3**. Since the training data include many examples of using the functions provided in the system prompt, as well as examples where new functions are implemented within the code representing the MAS (see the example in Line 732-757), MAS-GPT learns both to use the available functions from the system prompt and to implement new python functions itself.

---

Thanks for mentioning these points. We will open source all code, data, and models. Looking forward to your feedback and re-evaluation!
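To make the `utils` interface described in **A4** concrete, here is a minimal, self-contained sketch of how a generated MAS might compose these utilities. The `DummyLLM` class, the canned model reply, and the `forward` workflow are illustrative assumptions added by the editor (the real `utils` module serves actual models); only the function signatures follow the description above, and `execute_code` here is a simplified stand-in for the implementation shown in **A3**.

```python
import contextlib
import io
import traceback

def execute_code(code):
    # Simplified stand-in for the `execute_code(code)` utility: run the code,
    # capture print() output and the `output` variable, and report errors.
    local_vars = {}
    stdout_capture = io.StringIO()
    try:
        with contextlib.redirect_stdout(stdout_capture):
            exec(code, {}, local_vars)
    except Exception:
        return f"Error:\n{traceback.format_exc()}"
    return (f"Final output: {local_vars.get('output', 'None')}\n"
            f"Print during execution:\n{stdout_capture.getvalue().strip()}")

class DummyLLM:
    # Stand-in for the `LLM(model_list)` class; a real deployment would call a served model.
    def call_llm(self, prompt):
        # Pretend the coder agent answers every query with a tiny program.
        return "output = sum(range(1, 11))"

def forward(taskInfo):
    # An illustrative generated MAS: a coder agent writes code, a tool agent
    # executes it, and the execution feedback becomes the system's answer.
    llm = DummyLLM()
    code = llm.call_llm(f"Write python code to solve: {taskInfo}")
    return execute_code(code)

print(forward("Compute the sum of integers from 1 to 10."))
```

With the canned reply above, the sketch prints `Final output: 55` for the toy query; the point is only to show the agent-writes-code, tool-executes-code composition that the `utils` functions enable.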
Summary: The paper introduces MAS-GPT, a novel approach that trains LLMs to automatically generate query-specific multi-agent systems (MAS) in a single inference step. Unlike previous methods requiring manual configuration or multiple LLM inferences, MAS-GPT simplifies MAS creation by reframing it as a generative language task, where the MAS is output as executable Python code tailored to user queries. The authors propose a consistency-oriented dataset construction pipeline to produce high-quality training data, enabling MAS-GPT to effectively learn to build adaptive MAS. Experiments on nine diverse benchmarks show that MAS-GPT consistently outperforms existing multi-agent methods, achieving better adaptability, generalization, and significantly lower computational costs.

Claims And Evidence: The primary claim of this paper is that MAS-GPT significantly improves adaptability, generalization, and computational efficiency compared to existing methods. To substantiate this claim, the authors conducted experiments across nine datasets. However, I find that the experimental setting presented in this paper is toy-like rather than realistic, as the training and testing data are derived from the same domain. If the authors wish to convincingly demonstrate MAS-GPT's generalization capability, they should conduct additional experiments on out-of-domain datasets, such as GAIA. Additionally, the authors argue that MAS-GPT offers improved computational efficiency over manually designed multi-agent systems. Nevertheless, their evaluation does not account for the additional computational costs incurred by model training and dataset refinement, both of which likely require significant time investment. In the authors' setting, users would need to train "MAS-GPT" separately for each domain.

Methods And Evaluation Criteria: The authors use the metrics from the benchmarks employed in the experimental section. The metrics are good.
Theoretical Claims: There are no theoretical claims in the paper.

Experimental Designs Or Analyses: I checked all experiment results and settings. The critical issue is what I mentioned in the previous part: the training and test data are from the same domain. I seriously doubt the usability of this method.

Supplementary Material: Yes. I checked all supplementary materials.

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

**Strengths**

1. The paper is well-written; I could easily follow the reasoning logic. Overall, it is easy to understand.
2. I agree with the motivation of this paper. Building MAS manually is not practical. As the number of applications/queries grows, the human effort needed will be large.

**Weakness**

1. The experimental setup appears simplistic and unreliable. The authors trained MAS-GPT and evaluated it within the same domain, suggesting that users would need to retrain MAS-GPT every time they apply it to a new domain. Considering the training time required for MAS-GPT, I doubt it would actually be more efficient than manual MAS design.
2. In Table 2, the performance difference between "Single" and the proposed method is not obvious. Most of the performance improvement seems to come from the GSM-H dataset. Improvements on other datasets are small and may be due to randomness. Multiple evaluation runs need to be conducted.
3. The authors spent considerable effort manually refining MAS within their training dataset. As a result, I suspect that the observed performance improvements largely originate from the manually designed MAS, and that MAS-GPT essentially memorizes these manually provided MAS inputs.

Other Comments Or Suggestions: The authors should further elaborate in their paper, particularly through experiments or additional analysis, on why MAS-GPT is more efficient compared to manual MAS design.

Questions For Authors:

1. In Table 2, how many experimental runs did the authors perform? Could the authors explain why the differences across datasets are relatively small?
2. Why did the authors train and test MAS-GPT within the same domain? When claiming that MAS-GPT is efficient, did the authors account for the development and training time of MAS-GPT itself?

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: We are glad to see that you agree with our motivation and that our approach is novel. We notice that your main concern is about the experimental setup, which we believe is caused by some confusion. (Since the comments in `Claims` overlap with `Weaknesses`, we focus on the latter.)

---

**W1:** The authors trained MAS-GPT and evaluated it within the same domain?

**Answer:** Sorry for the confusion. (1) We would like to highlight that we do **NOT** 'train MAS-GPT and evaluate it within the same domain'. In contrast, we train MAS-GPT on diverse domains simultaneously and evaluate it on diverse domains **without re-training**. For example, during evaluation, GPQA (graduate-level QA), SciBench (college-level scientific problems), and AIME-2024 (mathematical competition) are all **out-of-domain benchmarks and much harder** than the training data. (2) Following your advice, we report performance results in the medical domain, MedQA (there is no medical dataset in training). From the table, we see that MAS-GPT is indeed generalizable.

||AgentVerse|DyLAN|MAS-GPT|
|-|-|-|-|
|MedQA|65.84|76.34|78.60|

(3) Compared to many optimization-based methods, MAS-GPT is indeed more generalizable. GPTSwarm (ICML 2024 oral), AFlow (ICLR 2025 oral), and AgentPrune (ICLR 2025) are all benchmark-dependent methods. That is, to test their methods on one benchmark, they first need to optimize on a subset (with GT labels) of that benchmark. In contrast, our MAS-GPT is optimized on diverse training data and then **generalizes to many benchmarks without modification**.

---

**W2:** The performance difference between "Single" and the proposed method is not obvious?

**Answer:** Sorry for the missing details. We ran the experiments twice. We would like to emphasize three perspectives. (1) MAS-GPT is the only method that consistently achieves better performance than SINGLE.
Please refer to the following table, where we report the percentage of benchmarks on which a method outperforms a single agent.

|Method|CoT|SC|AgentVerse|DyLAN|MAS-GPT|
|-|-|-|-|-|-|
|Percentage|50|62.5|37.5|50|100|

(2) On particularly challenging datasets such as AIME-2024 and GSM-H, our method achieves significantly better performance than SINGLE (e.g., 16.67 on AIME-2024), demonstrating substantial potential. (3) We agree that for some benchmarks, improvements are not significant. Please note that these patterns commonly exist in all methods. The reasons why we still report them are twofold. First, these benchmarks are commonly used in the MAS literature and we follow their setups. Second, we want to show that our method is generalizable across diverse datasets.

---

**W3:** Improvements originate from manually designed MAS?

**Answer:** Thanks for the comments. We would like to address your concerns from two perspectives. (1) Following your advice, we compute the number of unique MAS at test time (GSM-Hard, MATH, and SciBench) compared to those in the training data. This strongly verifies that MAS-GPT does **NOT** simply memorize the training data.

||GSM-Hard|MATH|SciBench|
|-|-|-|-|
|Unique/total|925/1000|690/1000|491/692|

(2) It is NOT a bad thing if MAS-GPT sometimes generates a MAS that exists in the training data. The key to training MAS-GPT is making it learn to generate an appropriate MAS for each specific query. The keyword here is `appropriate`, not `new`. This is similar to the training of LLMs: an LLM may generate sentences identical to those seen during training as well as new sentences, as long as they are appropriate.

---

**Q1:** How many runs?

**Answer:** We performed 2 runs. See W2.

---

**Q2:** Why did the authors train and test MAS-GPT within the same domain? When claiming that MAS-GPT is efficient, did the authors account for the development and training time of MAS-GPT itself?
**Answer:** We did NOT train and test MAS-GPT within the same domain; please refer to our response to Weakness 1.

Please allow us to emphasize the focus of this paper. Our ultimate goal is to make MAS-integrated applications effective and efficient during inference. To achieve this, we consider a brand-new paradigm: training an LLM. This LLM (i.e., MAS-GPT) is trained on diverse data and can generalize to diverse scenarios, making it a step closer to this goal. Training is **one-time while inference could be endless** (just as OpenAI trains GPT-4 once and serves the world countless times). Training-time efficiency is not the focus of this paper and could be a future direction. As an analogy, in efficient-LLM research, the goal is to ensure the trained model is efficient in applications (inference), rather than focusing on training efficiency. Our work shares a similar objective.

---

Overall, we hope that our responses can fully address your concerns and will be grateful for any feedback.

---

Rebuttal Comment 1.1: Comment: Thank you for your response. My concerns are still there:

[1] W1: As I previously commented, the majority of the performance gains appear to come from the Math domain. This is largely because the training set includes mathematics problems. Similarly, SciQ (training) and SciBench (testing) are from the same domain. Regarding MedQA, it's not truly out-of-domain; you include a large number of general QA problems in the training set. As I suggested before, GAIA is one example that could truly represent an out-of-domain setting.

[2] You state: "It is NOT a bad thing if MAS-GPT sometimes generates MAS that exists in the training data." **There is no doubt it is a bad thing.** Your motivation is to eliminate the need for manual MAS design, yet this implies that humans must still manually curate MAS examples for training (even more cost). That seems to contradict your original motivation.
From the table you listed, the number of unique MAS greatly supports my assumption (except for GSM-Hard): MAS-GPT memorizes the MAS in the training set. Additionally, why not show this information for all test data instead of cherry-picking three?

[3] It is difficult for me to accept the explanation that you **had already** run the experiments twice but **forgot** to report the details in the paper.

[4] The clarification regarding the performance comparison with the single-agent baseline makes sense to me. I was going to lower your score because you avoided some facts, but considering that you clearly explained your advantages over single agents, I will maintain the original score.

---

Reply to Comment 1.1.1: Comment: Thanks for the reply. There are some misunderstandings, and we would like to further address your concerns.

## Contexts

Firstly, please allow us to introduce the progress of MAS research to ensure that our contexts are well aligned. There are broadly three types of MAS works:

- **Type 1:** Manual design. Manually designed for specific tasks (such as coding): MetaGPT [1], ChatDev [2].
- **Type 2:** Test-time optimization. Rely on LLMs with multiple LLM calls to optimize the MAS and then solve the query: DyLAN [3].
- **Type 3:** Validation-set-required optimization. Optimize the MAS on a subset of the test dataset (e.g., MATH); the MAS is then tested on the corresponding test dataset (also MATH). That is, the training and testing datasets come from exactly the **same source**! Examples include GPTSwarm [4, ICML 2024 Oral], ADAS [5, ICLR 2025], AFlow [6, ICLR 2025 Oral], AgentPrune [7, ICLR 2025]. We have now compared with all of these methods!

However, when applying MAS in practice (e.g., serving the world like ChatGPT), they all fail:

- **Type 1 -> Inadaptivity:** Real-world user queries are diverse, and a fixed, manually-designed MAS would fail.
- **Type 2 -> Cost-inefficiency:** Optimizing the MAS for each query with many LLM inferences is too cost-intensive for wide application.
- **Type 3 -> Lack of generalization:** In [4,5,6,7], every time they switch to another test dataset, their MAS needs to be **re-optimized** (costing hundreds or thousands of LLM calls). However, user queries are diverse and no related examples are available in advance, making these methods inapplicable.

Addressing these, MAS-GPT offers three key advantages:

- **Adaptivity.** MAS-GPT adaptively generates suitable query-specific MAS.
- **Cost-efficiency.** Building a MAS requires only ONE inference of a 32B-sized model rather than multiple calls to strong models like GPT-4o.
- **Generalization.** After training only **ONCE**, MAS-GPT generalizes to many unseen domains and significantly more challenging tasks without retraining.

## Answers

**C1:** As I previously…

**A1:** (1) Although you felt that MAS-GPT is not generalizable enough, please note that **MAS-GPT is currently the most generalizable method!** Rather than optimizing on a subset from the same source as the test set before testing [4,5,6,7] (i.e., generalizing to only **1 same-source** test set per optimization), MAS-GPT generalizes to **4 same-source and 5 different-source** test datasets with ONE optimization! This should not be overlooked.

(2) Why we did not try GAIA: we misunderstood your point. We thought the concern resulted from insufficient descriptions of the test datasets. Meanwhile, we believe that compared to other works, our test datasets better verify generalization (different sources, more difficult).

(3) We are now working hard on GAIA; please give us some time! We will include these results in the revision. Thanks for the recommendation. (Qwen72b, samples without extra files, no tool provided) **MAS-GPT generalizes to GAIA!**

||L1|L2|
|-|-|-|
|Single|16.67|9.23|
|SC|19.05|13.85|
|MAS-GPT|23.81|21.54|

---

**C2:** You state:..
**A2:** (1) This does NOT contradict our motivation; there are misunderstandings. Our key motivation is to **generate an appropriate MAS for any query in practice** (serving like ChatGPT). Our claim is that manually-designed MAS fail in such scenarios, while MAS-GPT works by `standing on the shoulders of giants` (an exciting property). Ideally, if we include all existing MAS methods (**either manually designed or optimized by LLMs**) in training, then in practice, deploying one MAS-GPT can solve diverse user queries efficiently!

(2) Why we report these three: in Table 2, MATH, GSM-H, and SciBench are the three benchmarks where MAS-GPT shows the largest improvement. They best reflect that our gain does not stem from pure memorization.

---

**C3:** it is difficult…

**A3:** We are wrongfully accused. (1) We planned to run three times; however, due to time and budget constraints, we were only able to run twice. At the time, we did not consider it significant enough to emphasize. (2) We reported results on many benchmarks and baselines, which also makes our results convincing. (3) We will open source all code, data, and models.

---

We sincerely hope that you will consider our article from the perspective of the current progress in the MAS research community, and we hope you could re-evaluate our paper. Thanks!

[1] MetaGPT: Meta Programming for Multi-Agent Collaborative Framework, ICLR 2024 Oral
[2] ChatDev: Communicative Agents for Software Development, ACL 2024
[3] A Dynamic LLM-Powered Agent Network for Task-Oriented Agent Collaboration, COLM
[4] Language Agents as Optimizable Graphs, ICML 2024 Oral
[5] Automated Design of Agentic Systems, ICLR 2025
[6] AFlow: Automating Agentic Workflow Generation, ICLR 2025 Oral
[7] Cut the Crap: An Economical Communication Pipeline for LLM-based Multi-Agent Systems, ICLR 2025
Summary: The paper presents MAS-GPT, a novel approach that automates the creation of multi-agent systems specifically tailored to user queries using a single inference. The authors address key limitations in existing MAS approaches, namely high manual crafting effort and high computational costs, and propose to simplify MAS construction as a generative language task. They introduce a dataset construction pipeline emphasizing consistency, which facilitates supervised fine-tuning of MAS-GPT. Extensive experiments demonstrate MAS-GPT’s superiority across various tasks, proving its effectiveness, efficiency, and adaptability. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: MAS-GPT builds upon and advances existing literature on multi-agent systems and large language models. It addresses critical limitations found in prior systems such as MetaGPT, ChatDev, and AgentVerse, notably manual configuration and high inference costs. By reframing MAS construction as a generative language task and utilizing SFT, this work bridges a significant gap in current MAS approaches and introduces a flexible, scalable alternative. Essential References Not Discussed: Yes Other Strengths And Weaknesses: **Strengths**: 1. The inter- and intra-consistency-oriented approach is robust, methodologically sound. 2. Thorough experiments with diverse benchmarks and backbones validate the generality and effectiveness of MAS-GPT. 3. The writing is easy to follow. **Weaknesses**: 1. The generalization capability of MAS-GPT across significantly different or novel task domains remains unclear. Although the authors designate benchmarks such as H-Eval and SciBench as out-of-domain, their training dataset explicitly includes MBPP (programming benchmark similar to H-Eval) and SciQ (science question-answering dataset similar to SciBench). 
Consequently, these tasks are not strictly out-of-domain. The LLM could memorize task and MAS patterns during SFT and subsequently reproduce them during inference.
2. The generated MAS topologies presented are relatively straightforward. This simplicity raises concerns about MAS-GPT’s potential to effectively handle complex real-world tasks that require sophisticated interactions and multi-agent collaboration.
Other Comments Or Suggestions: Refer to the Questions.
Questions For Authors: Questions:
1. In Table 1, the number of MAS is 7580. What are your criteria for distinguishing two MAS instances? Specifically, if two MAS share an identical topology but differ in the instructions to the agents, are they counted as separate MAS instances, or are they considered identical?
2. How does MAS-GPT perform on tasks that are genuinely out-of-domain? For example, if MAS-GPT is exclusively trained on mathematics-related tasks (e.g., MATH and GSM8K), would the generated MAS structures remain effective for programming benchmarks such as HumanEval or MBPP? Could you discuss MAS-GPT’s transferability and performance in such scenarios?
3. What are the costs of generating the training data in terms of GPU hours and costs for calling APIs?
4. Have you examined the MAS topologies by representing them as Directed Acyclic Graphs? If so, have you identified any genuinely novel topological structures generated by MAS-GPT beyond those present in the training set? How do you verify that MAS-GPT is not simply memorizing training data topologies and applying them unchanged to tasks during inference?
Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1: Rebuttal: We are glad to see that you acknowledge that our approach is novel, methodologically sound, flexible, and scalable. We are sorry that some concerns remain; let us clarify.

---

**W1&Q2:** The generalization capability of MAS-GPT across significantly different or novel task domains remains unclear...

**Answer 1:** Our MAS-GPT can generalize well to new tasks, and we have made efforts to verify this point in our paper. Sorry for the potentially missing details.

(1) SciQ and SciBench are significantly different: SciQ is a knowledge-based benchmark (multiple-choice, https://huggingface.co/datasets/allenai/sciq) while SciBench is a reasoning-based benchmark (non-choice, https://huggingface.co/datasets/xw27/scibench). Meanwhile, SciQ is a dataset for **4th grade** exams while SciBench consists entirely of **college-level** scientific problems, which are much more challenging than those in the training data.

(2) Our paper also includes results on GPQA (graduate-level QA) and AIME-2024 (mathematical competition), which are all out-of-domain and much harder than the training data. Here, we additionally test on the medical domain, MedQA (there is no medical dataset in training). From the table, we see that MAS-GPT is indeed generalizable.

| |GPQA|AIME|MedQA|
|-|-|-|-|
|DyLAN|35.98|53.33|76.34|
|MAS-GPT|37.62|66.67|78.60|

(3) We kindly remind the reviewer that our MAS-GPT achieves better generalization than existing optimization-based methods: MAS-GPT does not require re-optimizing when applied to different benchmarks. For methods such as GPTSwarm (ICML 2024 Oral) and AFlow (ICLR 2025 Oral), given a benchmark (e.g., MATH), these methods first optimize on a subset and can then only infer on the corresponding test set.

---

&nbsp;

**W2:** The generated MAS topologies presented are relatively straightforward...

**A:** Thanks for the comments.
(1) The reason why we develop the MAS topologies at the current complexity is straightforward: the current complexity level is sufficient to achieve strong performance on most of the benchmarks. All benchmarks are commonly used by the MAS community, and we follow their setups. If we continued to increase the complexity, it might be hard to achieve a good cost-performance balance.

(2) Our approach to training MAS-GPT is scalable and methodologically capable of supporting sophisticated topologies. To achieve this, one only needs to design more sophisticated MAS and include them in the MAS pool. Based on this, the trained MAS-GPT would be able to generate sophisticated MAS for complex queries. Meanwhile, MAS-GPT already supports using tools such as a code executor, indicating its potential to scale to more tools for handling complex tasks.

(3) The main contribution of this paper is pointing out a new direction for the MAS community. As with the training of LLMs (e.g., GPT-4), there will always be cases that the trained MAS-GPT cannot solve, no matter how well it is trained. That is, the current MAS-GPT is not the end. Similar to the continuing advancement of LLMs, MAS-GPT can be continuously improved as the community designs more sophisticated (or better) MAS, tasks, and data samples.

---

**Q1:** If two MAS share an identical topology but differ in the instructions to the agent, are they counted as separate MAS instances?

**Answer:** Sorry for the confusion. Yes, they are counted as separate MAS instances. The reason for this criterion is that it is hard to automatically distinguish MAS by topology alone.

---

**Q3:** What are the costs for generating the training data in terms of GPU hours and costs for calling APIs?

**A:** Collecting the training data roughly requires 245k calls of open-source LLMs. The training process takes roughly 32 GPU hours for training 32B-sized models.
The refinement process costs roughly 143 US dollars for calling GPT-4o. Please note that MAS-GPT (32B) only needs to be trained **once** and can then be applied to handle diverse queries **without re-training** either MAS-GPT or the LLMs that drive the MAS. Meanwhile, we will open-source all of the data at every step (e.g., before and after filtering) and the models to facilitate future research.

---

**Q4:** Did you see new topologies?

**Answer:** We have manually checked some of the generated topologies and found that MAS-GPT can generate novel topologies. Please refer to our case study of Case 3 (Section A.3). Meanwhile, please note that even if a generated topology is the same as an existing one, MAS-GPT assigns appropriate prompts (instructions) to the agents within the topology, making it a query-specific and appropriate MAS. For example, during the inference of MAS-GPT on GSM-Hard, 925 out of 1000 generated MAS are unseen in the training data, indicating that MAS-GPT is generating query-specific, appropriate MAS.

---

&nbsp;

Overall, we hope that our responses can fully address your concerns, and we will be grateful for any feedback.
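The "unseen MAS" count discussed in this thread can be made concrete with a small sketch. This is a hypothetical illustration only: the edge-list topology representation, the `canonical` function, and the toy MAS below are assumptions, not the paper's actual data format. It also mirrors the stated criterion that an identical topology with different agent instructions counts as a separate MAS.

```python
# Hypothetical sketch: count generated MAS that are unseen in training.
# A MAS is modeled as (directed agent-to-agent edges, per-agent instructions);
# both the representation and the examples are illustrative assumptions.

def canonical(mas):
    edges, instructions = mas
    # Frozen, order-independent form so two equal MAS compare equal.
    return (frozenset(edges), tuple(sorted(instructions.items())))

train_mas = [
    ([("solver", "reviewer")], {"solver": "solve", "reviewer": "review"}),
]
generated = [
    # Identical to a training MAS -> seen.
    ([("solver", "reviewer")], {"solver": "solve", "reviewer": "review"}),
    # Same topology, different instructions -> counted as a separate MAS.
    ([("solver", "reviewer")], {"solver": "solve stepwise", "reviewer": "review"}),
    # Genuinely new topology.
    ([("planner", "solver"), ("solver", "reviewer")],
     {"planner": "plan", "solver": "solve", "reviewer": "review"}),
]

seen = {canonical(m) for m in train_mas}
unseen = [m for m in generated if canonical(m) not in seen]
# Here 2 of the 3 generated MAS are unseen.
```
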
Summary: In this paper, the authors propose to train a GPT to generate code that represents a multi-agent system, thus providing a team build for each query. Specifically, the authors propose to use some existing datasets as training samples and run these samples on 40 different predefined systems to form training pairs for the model to learn the best candidates. The results on 8 datasets show that the proposed method is better than many fixed baselines, such as CoT and DyLAN.
Claims And Evidence: The motivation of the proposed method is generally clear. The authors want to address the issues of building an adaptive system for each query, human labor, and system cost.
Methods And Evaluation Criteria: The motivation for choosing these 40 MAS pools is unclear. Why do you include these models? Does it mean the model can only support existing frameworks? Since MAS are evolving, the proposed method is more like a model selection algorithm than an auto team-building algorithm. Although the system can generalize to unseen MAS, it would be better to conduct an experiment that enlarges the pool with some random systems.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The cost comparison per sample is not fair. The authors compare cost by counting the number of inferences; however, inferences have diverse lengths, which is unfair to short-response generators. It would be better to include the real monetary or token cost of the system to give a direct understanding of the result.
The experiment design is unfair. This is two-fold. First, the proposed method uses the training set of the evaluation dataset while the baselines do not. Thus, it is a comparison between fine-tuning and zero-shot learning. It would be fairer to compare the cross-domain or zero-shot capability of the proposed method. Second, the baselines are largely included in the system. For instance, CoT is inside the 40-MAS pool.
Thus, as long as the model can figure out the performance of CoT, it can beat it naturally. It would be better not to include the baselines in the pool, to see the filter-out performance, mimicking real-world cases where we do not know what the other methods are.
Supplementary Material: Did not see one.
Relation To Broader Scientific Literature: Highly related to auto team building and multi-agent systems.
Essential References Not Discussed: The discussion is relatively sufficient.
Other Strengths And Weaknesses: I like the idea of converting the auto team-building task into a coding task and asking a GPT to write code to build the team.
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your appreciation of our idea and motivation. We would like to address your remaining concerns in the following.

---

&nbsp;

**Methods:** The motivation for choosing these 40 MAS pools is unclear.

**A:** Sorry for the confusion. Let's clarify.

(1) The key motivation for choosing these 40 MAS in the initial pool is to teach the LLM the basic format for representing a MAS. These 40 MAS cover basic elements such as chain-of-thought prompts, role-playing prompts, LLM-calling functions, and functions to get code execution results. As the base LLM does not know our representation of MAS (verified by the poor performance in Figure 5b, N=0), including these basic elements is critical for teaching the LLM our MAS representation.

(2) MAS-GPT is not simply selecting MAS. For example, during the inference of MAS-GPT on GSM-Hard, 925 out of 1000 generated MAS are unseen in the training data, indicating that MAS-GPT is not merely selecting MAS but generating query-specific, appropriate MAS.

(3) These 40 MAS constitute only a subset of the full training data. Actually, they only serve as `seed MAS`: they are evolved during the query-MAS pair refinement process, which indeed enlarges the pool with new systems. We will include these points in the revision.

---

&nbsp;

**Exp1:** The authors compare the cost by computing the number of the inferences ... better to include token cost.

**A:** Thanks for the suggestion. Due to limited time, we report the following results (and will report more in our revision), where we see that MAS-GPT achieves **the best accuracy with the least token cost**.

| |AgentVerse|DyLAN|MAS-GPT|
|-|-|-|-|
| LLM Calls | 12.05 (70B) | 12.96 (70B) | 1 (32B) + 6.44 (70B) |
| Tokens | 8610 (70B) | 4874 (70B) | 1133 (32B) + 2127 (70B) |
| Acc (%) | 59.36 | 60.54 | 64.47 |

---

&nbsp;

**Exp2:** It is fairer to compare the performance of the cross-domain or zero-shot capability.

**A:** Thanks for the suggestion.
Actually, we have already compared cross-domain/zero-shot capability on datasets such as HumanEval, HumanEval+, GPQA, SciBench, and AIME-2024. Please note that all of these datasets are **only used for evaluation, not training**. We would like to highlight that GPQA (graduate-level QA), SciBench (college-level scientific problems), and AIME-2024 (mathematical competition) are all much harder than the training data. We also additionally conduct experiments on MedQA (medical domain). We can see that our method achieves **the best performance in these cross-domain setups**. These experiments also verify the generality of MAS-GPT: it generalizes to domains unseen in the training data and to much harder queries than those seen in training.

| |HumanEval|HumanEval+|GPQA|SciBench|AIME|MedQA|
|-|-|-|-|-|-|-|
|DyLAN|79.01|75.78|35.98|19.79|53.33|76.34|
|MAS-GPT|80.25|78.88|37.62|24.21|66.67|78.60|

---

&nbsp;

**Exp3:** the baselines are largely included in the system. For instance, the CoT is inside the 40 MAS pool. ... It would be better not to include the baseline in the pool to see the filter-out performance...

**A:** Thanks for the comments. We would like to answer from three perspectives.

(1) Firstly, our initial MAS pool only includes a few baselines with simple operations such as CoT, but does not include complicated baselines such as DyLAN. We include these simple elements to teach the LLM to generate MAS in our desired format (see also the response to Methods).

(2) Secondly, we would like to kindly inform the reviewer that including existing baselines in our initial MAS pool does not conflict with the rationale and motivation of MAS-GPT. One exciting and promising advantage of MAS-GPT is that it can `stand on the shoulders of giants`.
Ideally, if we could include all existing high-performance MAS methods in the MAS pool, then in real-world applications we would only need to deploy MAS-GPT to solve diverse user queries, rather than deploying multiple MAS and designing complicated rules to select among them.

(3) Following your suggestion, we exclude CoT from the MAS pool and re-run the experiments. We report the results in the following. From the table, we see that excluding CoT performs comparably with the current version. This result is reasonable because CoT's performance is average, so during pair evaluation and selection CoT is less likely to be paired with queries, resulting in few CoT samples in the training data.

We truly believe in this new paradigm. With more diverse and high-performance MAS being included, we believe that MAS-GPT will be further advanced in a way similar to the advancement of ChatGPT: with better data and training techniques, the models become better.

| |MATH|MMLU|GPQA|
|-|-|-|-|
|Before|68.65|78.38|37.62|
|After|69.09|75.59|37.62|

---

&nbsp;

Overall, we hope that our responses can fully address your concerns, and we will be grateful for any feedback.
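The code-as-MAS paradigm debated in this thread (a GPT writes code that builds the team) can be sketched in a few lines. This is a hypothetical illustration: the `call_llm` stub, the solver/reviewer structure, and the output format are assumptions, not the paper's actual MAS representation.

```python
# Minimal sketch of "MAS as generated code" (all specifics are assumed):
# a generated Python string defines the multi-agent workflow, which is
# materialized with exec() and then run on a query.

def call_llm(prompt):
    # Stand-in for a real LLM call; returns a canned answer for the demo.
    return f"[answer to: {prompt}]"

GENERATED_MAS = '''
def run_mas(query):
    # Agent 1: a solver drafts an answer.
    draft = call_llm("Solve step by step: " + query)
    # Agent 2: a reviewer checks and refines the draft.
    return call_llm("Review and refine this solution: " + draft)
'''

namespace = {"call_llm": call_llm}
exec(GENERATED_MAS, namespace)  # materialize the generated MAS
result = namespace["run_mas"]("What is 2 + 2?")
```

Because the MAS is plain code, a generator can vary both the topology (which agents call which) and the per-agent instructions, which is what makes per-query MAS generation possible.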
LLM Data Selection and Utilization via Dynamic Bi-level Optimization
Accept (poster)
Summary: The paper proposes a dynamic bi-level optimization framework to improve data selection and utilization during LLM training. The bi-level optimization includes updating model parameters using data weighted by a weighting model, and optimizing the weighting model based on the model's updated performance. Experiments show the proposed method enhances training efficiency and model performance, and can transfer across models and data selection methods.
## Update after rebuttal
I have no more concerns, and recommend accepting this paper.
Claims And Evidence: The paper claims the method improves model performance and training efficiency via dynamic data weighting. The evidence is shown in Tables 1~4: the proposed DWM improves accuracy across different model sizes and base data selection methods. Figure 3 shows DWM shifts weights from high-perplexity to diversity-focused data as training progresses.
Methods And Evaluation Criteria: Yes, the paper compares the proposed method with a series of state-of-the-art data selection methods. The evaluation makes sense and is reasonable.
Theoretical Claims: No theoretical proof in the paper.
Experimental Designs Or Analyses: The experiments are well organized, evaluating DWM on diverse benchmarks and comparing against static data selection baselines (e.g., random, DSIR, MATES). The method is also transferred to larger models and other selection methods to validate generalizability. The ablation studies that isolate the impact of bi-level optimization and dynamic weighting are well designed.
Supplementary Material: No.
Relation To Broader Scientific Literature: The work builds on static data selection methods and optimization frameworks like meta-learning, addressing their limitation of ignoring dynamic model preferences during LLM training. Deeper comparisons with adaptive sampling strategies like active learning and gradient-agnostic dynamic weighting approaches could further clarify its positioning.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
- The proposed method addresses the limitation of static data selection by dynamically adjusting data weights based on the model's preferences.
- The bi-level optimization framework can jointly consider the effects of data samples and model updates, improving data utilization effectiveness.
- Extensive experiments demonstrate its applicability to LLM training and compatibility with other selection methods, enhancing its practical utility.
Weaknesses:
- Bi-level optimization may introduce significant computational costs compared to baseline static selection methods like DSIR. Please clarify this issue.
- The paper does not clearly show the respective contributions of dynamic weighting and bi-level optimization, making it unclear which aspect drives the improvement.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful and encouraging review. Below, we address your questions and concerns in detail.

---

**Q1. The Computing Cost of the Bi-Level Optimization**

**A1.** DWM does introduce additional computational overhead. We provide an analysis of this issue in our response to **R3Q1** and will include the discussion in the revision.

---

**Q2. The Discussion of the Contributions of Dynamic Weighting and Bi-Level Optimization**

**A2.** Sorry for the unclear expression. We compare the contributions of dynamic weighting and bi-level optimization in Sec. 5.4 and Fig. 4 of the paper. We present the results of RANDOM, as well as RANDOM_DWM_W1 and RANDOM_DWM_W4, which apply the weighting models from the first and final stages, respectively, throughout all training stages. Note that the weighting model used in RANDOM_DWM_W1 or RANDOM_DWM_W4 is trained by the bi-level optimization. We also include our DWM method, which dynamically learns the weighting model during training. As shown in Fig. 4, although using a single-stage weighting model obtained through bi-level optimization (RANDOM_DWM_W1 or RANDOM_DWM_W4) improves model performance, dynamically learning the weighting model (DWM) allows adaptation to the model's evolving data preferences across training stages, resulting in more robust performance gains. Thank you for your suggestion; we will revise the writing of the paper and explicitly highlight the contributions of these two components.
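The alternation described in this exchange (inner step: train the model on weighted samples; outer step: adjust the weighting model to improve validation performance) can be sketched as a toy bi-level loop. Everything here is illustrative: the "LLM" is a single scalar parameter, the weighting model is one logit per sample, and the outer gradient is a finite-difference estimate rather than DWM's actual procedure.

```python
import math

# Toy bi-level optimization loop (all specifics are assumptions):
# inner step fits parameter w to softmax-weighted samples; outer step
# nudges the weighting logits to lower loss on a validation target.

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def inner_step(w, samples, weights, lr=0.1):
    # Weighted squared-error training loss: L = sum_i a_i * (w - x_i)^2
    grad = sum(a * 2.0 * (w - x) for a, x in zip(weights, samples))
    return w - lr * grad

def val_loss(w, target=1.0):
    return (w - target) ** 2

samples = [0.0, 1.0, 2.0]   # candidate training samples
logits = [0.0, 0.0, 0.0]    # weighting "model": one logit per sample
w = 0.0
for stage in range(50):
    weights = softmax(logits)
    w_next = inner_step(w, samples, weights)
    base = val_loss(w_next)
    # Outer update: finite-difference gradient of the validation loss
    # w.r.t. each weighting logit, taken through the inner step.
    for i in range(len(logits)):
        bumped = logits[:]
        bumped[i] += 1e-4
        g = (val_loss(inner_step(w, samples, softmax(bumped))) - base) / 1e-4
        logits[i] -= 1.0 * g
    w = w_next
# After alternating updates, w sits near the validation target of 1.0.
```

The point of the toy is the coupling: the weighting logits are judged only by how the *updated* model performs on validation, which is what distinguishes bi-level weighting from static per-sample scoring.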
Summary: The paper introduces a Data Weighting Model (DWM) that dynamically adjusts data weights during LLM training using a bi-level optimization framework. DWM captures evolving data preferences by iteratively updating a weighting model based on validation performance. Experiments on 370M and 1.3B models demonstrate improved performance over static data selection methods, transferability across models and datasets, and insights into evolving data preferences.
Claims And Evidence: Overall, both theoretical perspectives and solid empirical findings support their claims.
Methods And Evaluation Criteria: The methods in this paper are composed of a bi-level optimization framework, dynamic data weighting, and multi-stage alternating iteration. The bi-level optimization framework jointly optimizes the LLM and the weighting model, with the weighting model updated to maximize the trained model's performance on a validation set. Dynamic data weighting is implemented by DWM, which assigns weights to data samples within each batch, considering their interactions and the model's current preferences. In the training process, the multi-stage alternating iteration alternates between updating the weighting model and the LLM parameters in stages to capture dynamic data preferences. As for the evaluation criteria, model performance is evaluated on nine downstream tasks under zero-shot and two-shot settings, with normalized accuracy as the metric. These criteria align with standard evaluation practices in the field of data selection.
Theoretical Claims: The bi-level optimization approach effectively captures the dynamic data preferences of the model, improving data utilization and generalization. The weighting model learns to assign higher weights to data samples that are more beneficial for the model's performance, adapting as the model evolves during training.
Experimental Designs Or Analyses: The paper presents a comprehensive experimental design.
In terms of data selection, it utilizes randomly selected data and compares results with state-of-the-art data selection methods like DSIR and QuRating. For the model architecture, Llama-2 models with 370M and 1.3B parameters are employed, trained on 30B tokens selected from the SlimPajama dataset. The training setup uses LAMBADA as the validation set, splits training into five stages, and adopts a micro-batch size of 8. Moreover, an ablation study is conducted to compare models trained with static (fixed after early or late stages) versus dynamic weighting models, effectively highlighting the significance of continuous adaptation.
Supplementary Material: Additional details are provided in the appendix, including implementation specifics and hyperparameter settings. There are also extended ablation study results comparing different weighting model configurations and validation tasks, as well as further analysis of the weighting model's preferences across different data domains and training stages.
Relation To Broader Scientific Literature: The authors contextualize their work well within the literature on data selection, such as QuRating and DSIR.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. The authors propose a novel bi-level optimization framework that effectively tackles the limitations of static data selection by explicitly modeling dynamic interactions and preferences, which is a significant advancement in the field.
2. The paper demonstrates the transferability of DWM by achieving consistent improvements across larger models (1.3B) and a variety of data selection methods such as DSIR and QuRating, highlighting its strong transferability and generalization ability.
3.
The authors undertake a comprehensive evaluation across 9 benchmarks under zero-/few-shot settings and supplement it with ablation studies that strongly validate the design choices, ensuring the robustness and reliability of the proposed approach.
4. This paper offers insightful analysis of how data preferences evolve during training, such as the shift from prioritizing expertise to writing quality in later stages, which adds a deeper understanding of the training dynamics.
Weaknesses:
1. The bi-level optimization framework, while innovative, likely increases training costs. However, these additional costs are not quantified in the paper. In my opinion, the additional costs should be analyzed.
Other Comments Or Suggestions: The costs of the bi-level optimization framework should be analyzed.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful and encouraging review. Below, we address your questions and concerns in detail.

---

**Q1. The Training Cost of the Bi-Level Optimization**

**A1.** We would like to clarify that in DWM, we employ a bi-level optimization strategy on the 370M model to separately train the weighting model and the language model. Once training is completed, the learned weighting model can be directly transferred to larger models without additional training. That said, using a trained data weighting model does introduce some computational overhead. Referring to [1], the training cost in FLOPs can be approximated as: **Training FLOPs ≈ C × Model Parameters × Token Count**, where the constant **C** depends on whether backpropagation is performed. In our case, since the weighting model only performs forward inference when assisting the training of larger models, **C** can be approximated as 2 (compared to 8 for full backpropagation). Therefore, when transferring the 370M weighting model to the 1.3B model, the additional training overhead is roughly 9%, and this overhead decreases as the size of the target model increases. Thanks, and we will add this discussion in the revision.

[1] Training Compute-Optimal Large Language Models, 2022.

---

Rebuttal Comment 1.1: Comment: Thanks for the response. I like this work for the research area it explores. I will maintain my original rating.

---

Reply to Comment 1.1.1: Comment: We sincerely appreciate the time and effort you dedicated to providing constructive feedback on our paper. Your insightful and helpful comments have offered valuable guidance for improving our work. Thanks!
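The FLOPs argument in this exchange can be checked back-of-the-envelope. Using the quoted constants (C ≈ 2 for forward-only inference, C ≈ 8 for full backpropagation), the overhead of running a 370M forward-only weighting model alongside 1.3B training comes out under 10%, in the ballpark of the ~9% figure above; the exact number depends on accounting choices this sketch does not try to reproduce.

```python
# Sanity check of Training FLOPs ≈ C × Params × Tokens, with the
# constants quoted in the reply (assumed, not exact): C=2 forward-only,
# C=8 full backpropagation. Parameters and tokens are in billions.

def approx_flops(c, params_in_b, tokens_in_b):
    return c * params_in_b * tokens_in_b

tokens = 30.0                                       # 30B training tokens
target_training = approx_flops(8, 1.3, tokens)      # training the 1.3B model
weighter_forward = approx_flops(2, 0.37, tokens)    # 370M weighter, forward only
overhead = weighter_forward / target_training       # fraction of extra compute

# For a larger target model the relative overhead shrinks further.
overhead_7b = weighter_forward / approx_flops(8, 7.0, tokens)
```
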
Summary: This paper introduces a novel Data Weighting Model (DWM) to enhance data utilization during large language model (LLM) pre-training. DWM provides a dynamic data selection method by dynamically adjusting the weights of data samples within each training batch using a bi-level optimization framework. This framework trains a weighting model to learn the data preferences of the LLM as it evolves, allowing for more effective data utilization and improved model performance. The authors demonstrate that DWM can improve the performance of LLMs trained with randomly selected data and can also enhance existing data selection methods like DSIR and QuRating. DWM presents a promising approach for optimizing data selection and utilization.
Claims And Evidence: Yes. The experiments do support that including the method improves performance.
Methods And Evaluation Criteria: Yes. The datasets used for evaluation and training are reasonable. The models are reasonable, though small and somewhat outdated.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: It seems that with the addition of DWM, performance on some tasks improves but on others decreases significantly (Table 4). It is not clear how to weigh the benefits of DWM against its disadvantages. The reliance on LAMBADA as the sole validation set seems limiting.
Supplementary Material: Appendix
Relation To Broader Scientific Literature: The contribution of data selection is important in general, and this work provides another method for data selection.
Essential References Not Discussed:
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. Don't stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964, 2020.
Lin, Zhenghao, et al. "Rho-1: Not all tokens are what you need." arXiv preprint arXiv:2404.07965 (2024).
Liu, Qian, et al. "Regmix: Data mixture as regression for language model pre-training."
arXiv preprint arXiv:2407.01492 (2024).
Yang, Yu, et al. "SmallToLarge (S2L): Scalable data selection for fine-tuning large language models by summarizing training trajectories of small models." Advances in Neural Information Processing Systems 37 (2024): 83465-83496.
Mirzasoleiman, Baharan, Jeff Bilmes, and Jure Leskovec. "Coresets for data-efficient training of machine learning models." International Conference on Machine Learning. PMLR, 2020.
Yang, Yu, Hao Kang, and Baharan Mirzasoleiman. "Towards sustainable learning: Coresets for data-efficient deep learning." International Conference on Machine Learning. PMLR, 2023.
Other Strengths And Weaknesses: Strengths:
- The generalizability of the method, which can be adapted to other selection methods, is beneficial.
- The method shows some improvements in performance when adapted.
Weaknesses:
- The method requires an additional model for weighting, requiring additional computation.
- Some literature is missing, as are baseline methods to study (provided in the list of missing references).
- A relevant baseline is missing (see the missing literature, e.g., Data Shapley).
- The method does not provide much improvement in performance.
- Newer benchmarks could be used for evaluation of the model.
Other Comments Or Suggestions: Figures could be more readable (Figure 2). Time and memory complexity could be provided in the paper. Adding more qualitative understanding (through examples) of the model's dynamic changes could help to better understand the weights.
Questions For Authors: It would be interesting to know how many additional random tokens would be required to match the performance of DWM with fewer tokens. Would computational time be saved?
Code Of Conduct: Affirmed.
Overall Recommendation: 1
Rebuttal 1: Rebuttal: Review 2 Rebuttal

---

Thank you for your thoughtful and detailed review. Below, we address your questions and concerns in detail.

---

**Q1. The Missing Literature**

**A1.** Thanks for your suggestions; we have added the discussion of these related works below, as well as in the revision. Existing data selection methods fall into three categories: (1) **Token-level**, which filters individual tokens (e.g., Rho-1); (2) **Group-level**, which mixes data pools (e.g., RegMix); and (3) **Sample-level**, which selects individual examples. Sample-level methods include heuristic or learning-based approaches (e.g., TAPT, S2L, DSIR, QuRating) and theoretically grounded coreset-based methods (e.g., IG, CREST). Our method also belongs to the sample-level category but differs by modeling the model's dynamic data preferences and capturing joint data effects during training. Through data weighting, DWM improves data utilization and can be easily transferred across models or combined with other selection strategies.

---

**Q2. The Missing Baseline**

**A2.** Thanks for your suggestion; we provide a discussion and comparison with Data Shapley. Unlike Data Shapley, our method (1) generalizes via end-to-end learning without per-dataset recomputation, and (2) optimizes the model directly rather than estimating fair contributions. For comparison, we trained a Shapley-based weighting model on the same data as DWM. DWM outperforms it, highlighting the benefit of learning data utility directly from the model.

| method | arc-c | arc-e | boolq | hellaswag | logiqa | obqa | piqa | sciq | winogrande | avg |
| :------------- | :---: | :---: | :---: | :-------: | :----: | :--: | :--: | :--: | :--------: | :--: |
| random-shapley | 24.3 | 44.5 | 54.2 | 36.3 | 24.6 | 28.8 | 64.2 | 77.7 | 52.2 | 45.2 |
| random-DWM | 24.7 | 46.8 | 56.6 | 36.5 | 25.8 | 28.2 | 65.0 | 80.5 | 53.4 | 46.4 |

---

**Q3.
The Reliance on LAMBADA** **A3.** We would like to emphasize that DWM is not heavily dependent on the validation set. DWM uses LAMBADA as validation due to its common use in language model pretraining [1–2]. Other reasoning datasets, such as HellaSwag (training set), also serve well, as shown below, where DWM trained with HellaSwag validation maintains strong performance on the 370M model.

| method | arc-c | arc-e | boolq | hellaswag | logiqa | obqa | piqa | sciq | winogrande | avg |
| :--------------------- | :---: | :---: | :---: | :-------: | :----: | :--: | :--: | :--: | :--------: | :--: |
| hellaswag-training set | 24.7 | 46.8 | 56.9 | 36.5 | 26.3 | 28.2 | 64.7 | 80.9 | 51.5 | 46.3 |
| lambada (DWM) | 24.7 | 46.8 | 56.6 | 36.5 | 25.8 | 28.2 | 65.0 | 80.5 | 53.4 | 46.4 |

--- **Q4. The Experimental Results** **A4.** Here we address the concerns related to the experimental results, which are obtained using the same model architecture or benchmark as in existing methods [2,3]. **1. The Performance Improvement.** We would like to clarify that the performance gains of the DWM algorithm on average can match or even surpass the results reported in references [1,3,4], demonstrating the effectiveness of our method. **2. The Performance Decrease of DWM in Partial Downstream Tasks** We would like to emphasize that DWM is trained on the 370M model, and it improves the performance of the 370M model on nearly all downstream tasks. The performance drops on partial downstream tasks primarily occur when transferring the trained DWM model to specific model–data combinations, which is mainly caused by incompatibility between the training model and the training data. A more detailed explanation can be found in our reply to **R1 Q3**. --- **Q5. Training Cost of DWM** **A5.** DWM does introduce additional computational overhead. We provide an analysis of this issue in our response to **R3Q1** and will include the discussion in the revision.
In addition, we show that DWM trained on 30B tokens matches the performance of a 370M model trained on 48B random tokens, yielding a 1.6× gain in training efficiency.

| method | arc-c | arc-e | boolq | hellaswag | logiqa | obqa | piqa | sciq | winogrande | avg |
| :---------- | :---: | :---: | :---: | :-------: | :----: | :--: | :--: | :--: | :--------: | :--: |
| random-48B | 24.9 | 46.9 | 58.4 | 38.1 | 26.4 | 28.8 | 65.2 | 78.6 | 51.1 | 46.5 |
| DWM | 24.7 | 46.8 | 56.6 | 36.5 | 25.8 | 28.2 | 65.0 | 80.5 | 53.4 | 46.4 |

[1] DsDm: Model-Aware Dataset Selection with Datamodels, 2024. [2] MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models, 2024. [3] QuRating: Selecting High-Quality Data for Training Language Models, 2024. [4] Data Selection for Language Models via Importance Resampling, 2023.
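The batch-level weighting DWM performs can be illustrated with a minimal numpy sketch. The function name and the softmax normalization are our assumptions for illustration; in the paper the weights come from a weighting model learned via bi-level optimization, not from fixed scores:

```python
import numpy as np

def weighted_batch_loss(per_sample_losses, scores):
    """Combine per-sample losses using batch-relative weights.

    `scores` stand in for the outputs of a learned weighting model
    (hypothetical here); a softmax over the batch turns them into
    normalized weights, so preferred samples contribute more.
    """
    losses = np.asarray(per_sample_losses, dtype=float)
    scores = np.asarray(scores, dtype=float)
    w = np.exp(scores - scores.max())  # numerically stable softmax
    w /= w.sum()
    return float(np.sum(w * losses))
```

With uniform scores this reduces to the plain batch mean, so the weighting acts strictly as a reweighting of the standard objective.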
Summary: This work proposes DWM to address the limitations of existing data selection methods that ignore the model's training dynamics during LLM pre-training. Based on a bi-level optimization, DWM adaptively sets the weights of each data sample in a batch. In experiments, as a plug-and-play module, DWM improves performance on downstream tasks across various settings. Claims And Evidence: 1. In the introduction, the authors claim that training on all available data may not be optimal and increases financial costs, but they do not conduct an efficiency test on the proposed method. On the contrary, it seems that the additional component may even introduce much more training time, which should be properly discussed in the experiments. 2. Moreover, the authors discuss the existing methods for data selection, such as selecting data before training or treating the data samples in a batch indiscriminately. However, experiments show that random sampling with DWM cannot outperform the SOTA, so I think the authors should also discuss the potential limitations of DWM. Methods And Evaluation Criteria: The method design and evaluation criteria are generally convincing. Theoretical Claims: N/A Experimental Designs Or Analyses: 1. While on average we observe a performance improvement, the performance on many tasks also decreases with DWM. Could the authors provide some explanations of what may potentially be instrumental to the improvement of DWM? 2. In few-shot learning, why do the authors only focus on 2 shots? What is the effect of the number of samples? 3. Moreover, the number of stages is not properly ablated. 4. Why do the authors report Tables 1 and 2 with the 370M model? Since the authors have trained a 1.3B model, reporting the performance of the 1.3B model at different stages would be much more convincing given that the paper focuses on LLMs. Supplementary Material: Reviewed.
Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: The proposed method is sound and the research topic is interesting and important. Weaknesses: See claims and experimental settings. Other Comments Or Suggestions: Line 261, 'dose' -> 'does' Questions For Authors: If the authors are able to solve the problems in experiments and claims, I would possibly adjust my rating. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful and valuable review. Below, we address your questions and concerns in detail. --- **Q1. Training Cost of DWM** **A1.** DWM does introduce additional computational overhead. We provide an analysis of this issue in our response to **R3Q1** and will include the discussion in the revision. --- **Q2. The Potential Limitation of DWM** **A2.** As a plug-and-play module, DWM captures the model's dynamic data preferences during training and can be applied to either randomly sampled or curated data. However, its performance is ultimately bounded by the quality of the applied data. Notably, the SOTA (QuRating) leverages data selected from 260B tokens using knowledge of GPT-3.5-turbo, offering significantly higher quality than the 30B randomly sampled tokens. As a result, DWM on random data cannot surpass QuRating, but when applied to QuRating data, it further enhances performance. We will include this discussion in the revision. **Q3. The Performance Decrease of DWM in Partial Downstream Tasks** **A3.** First, we clarify that DWM is trained on the 370M model using randomly selected data, and it consistently improves performance across nearly all downstream tasks under both zero-shot and two-shot settings. At most, a slight performance drop occurs on a single task, a phenomenon also reported in prior work [1][2], suggesting possible optimization conflicts among downstream tasks. Second, performance degradation on certain tasks mainly arises when transferring the DWM model (trained on 370M with random data) to the 370M model with QuRating data or the 1.3B model with DSIR data. This results from incompatibility between the model and the data during training. As discussed in the paper (line 354) and in reference [3], models of different scales vary in their capacity to absorb high-quality reasoning data. Furthermore, as shown in Sec. 5.3 (line 401) and Fig.
3, DWM encourages data diversity in early stages and gradually shifts toward expert and instructional data. For the 370M model with QuRating data, high-quality reasoning data quickly saturates learning, leaving little room for further improvement through DWM. Although larger models have greater capacity, DSIR data (knowledge-centric text) similarly limits DWM's optimization effect on the larger 1.3B model. Consequently, DWM yields marginal average improvements in these transfer settings, resulting in mixed task-wise outcomes. Notably, DWM performs well on the 370M model with DSIR data and the 1.3B model with QuRating data, which supports our hypothesis. These observations indicate that DWM's effectiveness remains constrained by the applied data, highlighting the need for more robust weighting strategies adaptable to diverse data types. Thanks. We will add this discussion in the revision. --- **Q4. The Two-Shot Setting** **A4.** We focus on the two-shot setting, following existing data selection methods [4], to analyze the model's capacity for reasoning and generalization. In general, an appropriate number of examples can help the model better understand the task. Moreover, for models with limited capacity, an excessive number of examples may lead to overfitting on the samples and a decline in generalization ability. --- **Q5. The Ablation of the Number of Stages** **A5.** Thanks. We provide the ablation study of this number below, where increasing the number of stages helps DWM better capture the model's dynamic data preferences, but also introduces additional training overhead for the weighting model. The results show that setting the number of stages to 5 achieves a good balance between performance and efficiency, indicating that the model's preferences may not dramatically change within a single stage. We will add this portion in the revision.
| stages | arc-c | arc-e | boolq | hellaswag | logiqa | obqa | piqa | sciq | winogrande | avg |
| :--------------- | :---: | :---: | :---: | :-------: | :----: | :--: | :--: | :--: | :--------: | :--: |
| 2-stages | 24.5 | 43.9 | 57.5 | 35.1 | 25.0 | 27.6 | 64.7 | 76.5 | 52.9 | 45.3 |
| 8-stages | 25.5 | 46.3 | 60.1 | 36.4 | 25.2 | 28.8 | 64.9 | 77.2 | 53.7 | 46.5 |

--- **Q6. Why Report the Stage-Wise Performance of the 370M Model** **A6.** In our paper, we perform bi-level optimization on a 370M model, where the language model and the weighting model are trained separately. The trained weighting model is then directly transferred to a larger 1.3B model. Therefore, we report the stage-wise performance of the 370M model in Table 1 and Table 2 to analyze the effect of this bi-level optimization on model training. [1] Data Selection for Language Models via Importance Resampling, 2023. [2] QuRating: Selecting High-Quality Data for Training Language Models, 2024. [3] Small Models Struggle to Learn from Strong Reasoners, 2025. [4] MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models, 2024.
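The stage-wise procedure ablated in Q5 can be sketched as a simple schedule. All names here are hypothetical stand-ins: `refresh_weighter` represents re-fitting the weighting model at a stage boundary, and `train_lm` represents weighted LM training on that stage's share of the token budget:

```python
def staged_schedule(total_tokens, n_stages):
    """Split a token budget into equal stages; the weighting model is
    refreshed at each stage boundary so it can track the LM's changing
    data preferences (more stages = closer tracking, more overhead)."""
    per_stage = total_tokens // n_stages
    return [(stage, per_stage) for stage in range(n_stages)]

def run_staged_training(total_tokens, n_stages, refresh_weighter, train_lm):
    # hypothetical driver loop: refresh the weighter, then train on the stage
    for stage, budget in staged_schedule(total_tokens, n_stages):
        refresh_weighter(stage)
        train_lm(budget)
```

The stage count trades fidelity against overhead, which is exactly the balance the ablation above probes.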
Constrained Pareto Set Identification with Bandit Feedback
Accept (poster)
Summary: This paper studies fixed-confidence identification of the Pareto set under linear feasibility constraints in a multi-objective bandit setting. The authors propose an algorithm and establish its near-optimal theoretical guarantees through information-theoretic lower bounds in the worst case, and validate their approach through extensive experiments. Claims And Evidence: I did not identify any errors in the theoretical claims presented. However, I discuss certain limitations of these theoretical results in 'Other Strengths and Weaknesses.' Regarding the experimental claims, please refer to my detailed comments provided in the 'Questions for Authors' section. Methods And Evaluation Criteria: In the experiments, it makes sense to compare the empirical stopping time. However, it would be better to also show the empirical failure rate. Theoretical Claims: I did not check the proofs. Experimental Designs Or Analyses: The authors indicate that they conducted 500 runs for the experiments presented in Figure 4, and 250 runs for those in Figure 6. Could the authors also report the exact number of successful runs achieved by the agent in e-cPSI (or cPSI), and clarify whether these results align with the specified thresholds of $\delta = 0.1$ (Figure 4) and $\delta = 0.01$ (Figure 6)? Furthermore, to ensure a fair comparison, could the authors confirm whether the number of failures for e-cPSI (or cPSI) is approximately consistent across the evaluated algorithms in these experiments? Supplementary Material: N/A Relation To Broader Scientific Literature: The primary contribution of this paper lies within the domain of multi-objective multi-armed bandits. While most prior studies have concentrated on regret minimization or unconstrained Pareto set identification, this paper presents the first investigation into constrained Pareto set identification, even within the simpler setting of linear constraints.
Essential References Not Discussed: I do not identify Essential References Not Discussed. Other Strengths And Weaknesses: ## Strengths: In my view, the key contributions of this paper are captured by Theorems 4.3 and 4.4. Specifically, Theorem 4.3 establishes a non-asymptotic upper bound, while Theorem 4.4 provides a corresponding lower bound, which is particularly valuable as it matches the upper bound of Theorem 4.3, albeit in a worst-case scenario. ## Weaknesses: 1. The motivation behind introducing linear constraints is not sufficiently clear. Although the authors provide an illustrative example concerning clinical trials, this example lacks detailed specifics to fully justify the necessity and relevance of incorporating linear constraints. 2. The practical applicability of the proposed algorithms appears significantly limited regarding the parameter $\delta$. According to Theorem 4.3, the algorithm is guaranteed to be $\delta$-correct only when $\delta < d^2/5^d$. For instance, setting $d=10$ imposes a condition $\delta < 10^{-5}$, severely restricting the range of realistic scenarios where this method could be applied effectively. 3. Within the main text, there is a noticeable absence of theoretical analysis specifically addressing the cPSI performance of the proposed algorithm. It would be beneficial to clarify whether the presented algorithm achieves near-optimality in terms of cPSI, or it needs a significant modification on the proposed algorithm to adapt the task of cPSI. Other Comments Or Suggestions: Line 85: Algorithm XX is a typo to be fixed. Questions For Authors: 1. In Section 5, the authors propose a two-stage algorithm comprising feasible set identification and Pareto set identification. How is the confidence level $\delta$ configured in each of these two stages? Additionally, is historical data from the first stage utilized in the second stage? 2. 
Could the authors clarify the meaning of the phrase "until a correct set is identified with high probability" in the context of the baseline method labeled "Uniform"? 3. The authors state that the experiments depicted in Figure 4 were run 500 times. Could they also report how many times the agent succeeded in achieving e-cPSI, and whether this empirical success rate aligns with the theoretical confidence level of $\delta = 0.1$? Furthermore, for a fair comparison, are the failure rates approximately consistent across different algorithms? 4. What is the precise relationship between $T^*_{\mathcal{M}}$ and $C^*_{\mathcal{M}}$ in the worst-case scenario discussed in Theorem 4.4? 5. What is the computational complexity of the proposed algorithm? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful feedback and detailed evaluation of our work. Below, we address each point raised by the reviewer. - Weaknesses 1. Linear constraints are widely used in applications like dose-finding trials, where balancing efficacy and toxicity is crucial (Mark C et al., 2013). Appx G.2 (Tab. 5) includes additional results on a more complex set of linear constraints. Our guarantees also extend to general convex constraints, but we focus on polyhedral ones for their practicality and efficiency. 2. We would like to clarify that this condition was actually needed only for the sample complexity bound and not for the correctness (for any $δ$, e-cAPE outputs a correct answer with probability larger than $1−δ$). We acknowledge that this was unclear from our statement of Thm 4.3. This condition is actually loose and can be removed (at the cost of a second-order term in the sample complexity independent from $δ$). 3. e-cAPE is mainly designed for e-cPSI, but any $𝛿$-correct algorithm for e-cPSI is also $𝛿$-correct for cPSI; so is e-cAPE. However, as the two tasks are different, e-cAPE is not necessarily optimal for cPSI. As noted in Section 3.1, we provide a dedicated algorithm for cPSI in Appx E, which is optimal as $𝛿\to 0$. An experiment in Appx G.2 compares e-cAPE to this cPSI-specific algorithm for $𝛿=0.01$. e-cAPE performed slightly better in practice despite not being tailored for cPSI. If given additional space in a revision, we would include more details on cPSI in the main text to further clarify its theoretical and practical aspects. - **Questions** 1. We set $𝛿/2$ for each stage to maintain an overall $𝛿$-correct guarantee. We did not retain historical data from the first stage (feasibility identification) to the second stage (Pareto set identification), as it is unclear whether the claimed sample complexity bound holds if we do.
Through experiments on synthetic instances, we actually observed that reusing historical data from the first stage often introduced bias in the second stage, leading to an overall increase in sample complexity. Exploring more principled ways to leverage historical data while mitigating bias is an interesting direction for future work. 2. By this, we mean that the algorithm runs until the stopping condition specified in Equation (7) is met. We clarify that the "Uniform" baseline shares the same stopping criterion as e-cAPE; the key difference lies in their sampling strategies. Specifically, while e-cAPE adaptively allocates samples, the "Uniform" algorithm follows a round-robin sampling scheme, allocating roughly equal samples to all arms. 3. For $𝛿=0.1$, we report below the empirical success rate of each algorithm:

| | e-cAPE | Uniform (U) | MD-APT+APE (A-A, Two Stage) | Racing algorithm (R-CP) |
|---|---|---|---|---|
| covboost | 100% | 100% | 100% | 96% |
| secukinumab | 100% | 100% | 100% | 100% |

All algorithms achieve similar success rates, and e-cAPE, R-CP, and A-A implement the same confidence bonus function. "Uniform" and e-cAPE share the same stopping rule. As detailed in Appx G, in the implementation we used the confidence thresholds advertised in Auer et al. (2016). Although tighter than those allowed by the theory, they remain very conservative. 4. $T^*_{M}$ captures the hardness of e-cPSI in the asymptotic regime (when $𝛿\to 0$, cf. Prop 3.4). As noted in Section 3.2, an extension of Degenne and Koolen (2019) to e-cPSI would yield an optimal (as $𝛿\to 0$) yet impractical algorithm (as it needs to solve up to $2^K$ max-min problems similar to Eq.(2) at each round, none having a closed form). This algorithm would satisfy $\lim_{𝛿\to 0}\frac{\mathbb{E}_{{\mu}}[τ_𝛿]}{\log(1/𝛿)}=T^*\_{M}$. On the other hand, $C_M^*$ is the leading complexity term in our lower bound and in the sample complexity of e-cAPE, for which the guarantees are non-asymptotic.
By combining Thm 4.3 and 4.4, on the class of problems $\tilde{D}$ (explicit in appx B), we have $$C_M^*(\mu)/4⩽ T^*_M(\mu)⩽256C_M^*(\mu),$$ which shows that $C_M^*(\mu)$ is a reasonable complexity proxy for problems in $\tilde{D}$ (up to some improvable constants). 5. The computational complexity of e-cAPE is mainly determined by computing the squared Euclidean distance $dist(x,P)^2$, which we solve using MOSEK (via CVXOPT) in $O(\max(m,d)^3)$, with $d$ the dimension and $m$ the number of constraints. Computing $dist(x,P^c)$ takes $O(md)$. Once the state (the quantities $M(i,j;t)^\pm_{i,j}$ and $(γ_i(t))_{i}$) is updated, $b_t,c_t$ are computed in $O(K)$. As only the means of $b_t,c_t$ change from $t$ to $t+1$, updating the state requires: * $O(n_t d+md)$ to update the feasible Pareto set ($n_t$: size of the feasible set). * $O(K)$ to update $M(i,j)^\pm$. * $O(\max(m,d)^3+\max(K,m)d)$ if $b_t$ or $c_t$ is empirically infeasible, otherwise $O(\max(K,m)d)$ to update $(γ_i(t))$. Per iteration, the cost is linear when $b_t,c_t$ are feasible; otherwise, cubic in $m$. Memory complexity is $O(K^2)$ to store the state. --- Rebuttal Comment 1.1: Comment: The experiments could be better, as the 100% success rate is too conservative for a fair comparison. However, considering this is a theoretical paper, I am satisfied with the overall response and have increased my score from 2 to 3. I would encourage the authors to incorporate the above discussion into the revised paper. --- Reply to Comment 1.1.1: Comment: We thank the reviewer again for taking the time to review our paper and engage with our rebuttal. We are pleased that the reviewer found our clarifications helpful and sincerely appreciate the updated score and thoughtful feedback. We also take note of the reviewer's comments on the experimental setup and will incorporate the suggested discussion into the revised version.
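As a dependency-free illustration of the projection step in point 5 above (the authors solve it exactly with MOSEK via CVXOPT; this sketch substitutes Dykstra's alternating projections, which converge to the Euclidean projection onto an intersection of halfspaces):

```python
import numpy as np

def project_onto_polyhedron(x, A, b, iters=500):
    """Approximate the Euclidean projection of x onto P = {y : A y <= b}
    via Dykstra's algorithm: cyclically project onto each halfspace while
    maintaining correction terms. dist(x, P)^2 is then ||x - y||^2."""
    y = np.asarray(x, dtype=float).copy()
    m = A.shape[0]
    corr = np.zeros((m, y.size))  # Dykstra correction terms, one per halfspace
    for _ in range(iters):
        for i in range(m):
            z = y + corr[i]
            viol = A[i] @ z - b[i]
            # closed-form projection onto the single halfspace {y : a_i.y <= b_i}
            y = z - max(viol, 0.0) / (A[i] @ A[i]) * A[i]
            corr[i] = z - y
    return y
```

For a single halfspace this reduces to the familiar closed-form projection; the quadratic-programming route the authors use is exact and carries the $O(\max(m,d)^3)$ cost quoted above.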
Summary: This paper studies the constrained Pareto set identification (cPSI) problem (with explainability). More specifically, the authors focus on the $(\epsilon, \delta)$-PAC learning setting, and the objective of the agent is to identify a partition of the arm set into three sets (the Pareto set, a set of suboptimal arms, a set of infeasible arms). They derive instance-dependent lower bounds on the sample complexity for cPSI and e-cPSI. Then, they propose an algorithm for e-cPSI termed e-cAPE. They provide the sample complexity analysis of e-cAPE and prove that it is nearly optimal for some problem instances, which implies that e-cAPE is nearly optimal in a worst-case sense. Using real-world datasets, they empirically show that the proposed method achieves smaller sample complexities for some problem instances. ## update after rebuttal Since my concerns (questions) have been resolved by the authors' response, I will keep my positive score. Claims And Evidence: Claims are supported by theoretical results (analysis of lower and upper bounds) and experimental results. Methods And Evaluation Criteria: Theorem 4.4 shows the proposed method is nearly optimal for some problem instances, and the evaluation metric for experiments is the sample complexity, which is standard for the PAC problem. Theoretical Claims: I have not checked the proofs. At least, the lower bounds seem standard and valid. Experimental Designs Or Analyses: The experiments are conducted using real-world datasets and the baselines seem valid. Supplementary Material: I have not checked the supplementary material. Relation To Broader Scientific Literature: As the authors state, cPSI and e-cPSI are practically important problem settings, and such algorithms would be beneficial to fields outside the ML community. Essential References Not Discussed: To the best of my knowledge, related works are adequately discussed.
Other Strengths And Weaknesses: - Strengths - The authors study practically important problem settings (cPSI and e-cPSI). - Experiments are conducted on real-world datasets. - The proposed method is near-optimal in a worst-case sense. - Weaknesses - Although the proposed method is near-optimal for some problem instances, it is a worst-case analysis. I suspect a very simple algorithm (such as uniform exploration) has a similar property. Other Comments Or Suggestions: Is an algorithm for cPSI asymptotically optimal? If so, since the current version of this manuscript has some space, you can include the claim even in the submitted version (and a revision). Questions For Authors: 1. As I wrote in "weaknesses", I think a uniform exploration algorithm can achieve optimality in a worst-case sense. Is it possible to theoretically compare Algorithm 1 to such a simple algorithm? Code Of Conduct: Affirmed. Overall Recommendation: 3
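The three-way partition described in the review summary (feasible Pareto arms, feasible-but-dominated arms, infeasible arms) can be made concrete with a toy sketch. This is our illustration only: it assumes the true means are known and all objectives are maximized, whereas a bandit algorithm works from confidence regions around estimated means:

```python
import numpy as np

def partition_arms(means, A, b):
    """Toy three-way partition of arms with known mean vectors.

    Arm k is feasible iff A @ means[k] <= b (componentwise); among
    feasible arms, arm i is dominated if some feasible j is at least
    as good on every objective and strictly better on one.
    """
    means = np.asarray(means, dtype=float)
    feas = [k for k in range(len(means)) if np.all(A @ means[k] <= b)]
    infeasible = [k for k in range(len(means)) if k not in feas]
    dominated = [
        i for i in feas
        if any(np.all(means[j] >= means[i]) and np.any(means[j] > means[i])
               for j in feas if j != i)
    ]
    pareto = [i for i in feas if i not in dominated]
    return pareto, dominated, infeasible
```

This mirrors the target of the identification task; the whole difficulty of the bandit problem lies in certifying this partition from noisy samples.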
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and for taking the time to evaluate our work. Below, we address each of the points raised in the review. * **Optimal algorithm for cPSI** We present an asymptotically optimal algorithm for cPSI in Appendix E. Since we believed that e-cPSI was better suited for the applications we focused on, we initially placed the cPSI algorithm in the appendix. However, as suggested by the reviewer, we can provide more details on this algorithm in the main text if space allows in the revision. * **Uniform exploration** We agree that an algorithm using uniform exploration could be worst-case optimal for certain very particular configurations of arms (though probably for a much smaller set of instances than the one constructed in our lower bound). Still, e-cAPE is designed to perform more efficiently by focusing on arms that are more likely to be part of the feasible Pareto set rather than exploring uniformly. This adaptive exploration typically leads to more efficient identification of the Pareto set, especially when the number of arms is large. In Appendix F, we analyze the limitations of an algorithm that performs uniform exploration and additionally discards some arms when their status can be deduced from confidence boxes. We show that for this algorithm (which is expected to be even better than pure uniform sampling due to the additional eliminations), there are some configurations where its sample complexity scales with a quantity that is of order $K$ times larger than that of e-cAPE, with $K$ the number of arms. This provides some theoretical insight as to why the sampling rule of e-cAPE leads to a lower sample complexity compared to uniform sampling. Moreover, we illustrate empirically the benefits of e-cAPE compared to both uniform sampling (see e.g. Table 5 in Appendix G.2) and uniform sampling with eliminations (see Figures 6-7 in Appendix G.2).
--- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. My question on uniform exploration has been resolved. I will keep my current score.
Summary: This work studies bandit Pareto set identification with constraints. In particular, the task is to choose arms to pull in each round until the set of feasible and Pareto-optimal arms is identified with probability at least $1 - \delta$. They give an algorithm that addresses this problem. ## update after rebuttal In the rebuttal, the authors clarified the meaning of $\lambda_\alpha$, and provided an additional result that doesn't require that $\delta$ shrinks exponentially in $d$. Regarding the clarification of $\lambda_\alpha$, I found the authors' explanation to be sound and quite helpful in general. Regarding the additional result, I do think that this alleviates some of the restrictions of the original result. However, one downside of this result is that we can't say much about the tightness (although this is not true in the reasonable setting where $\delta$ is small). Overall, I maintain my original score, as I think the results are good overall, but there are some points of weakness. Claims And Evidence: I have some concerns with the interpretation of the sample complexity bound. In particular, I'm not sure the claim of "near-optimal" is justified given my concerns detailed in \# 1 and \# 2 in the Questions box. Methods And Evaluation Criteria: Overall, the approach seems reasonable, and the use of sample complexity to evaluate the algorithm is reasonable. Theoretical Claims: I did not check the proofs, but have some concerns with the interpretation of the theoretical results as discussed in the Question box \# 1 and \# 2. Experimental Designs Or Analyses: There are no experiments. Supplementary Material: I did not check the supplementary material. Relation To Broader Scientific Literature: To my knowledge, this work contributes a new problem setting to the literature, combining the Pareto set identification problem with the feasible set identification problem. Accordingly, their approach appears to be novel.
Essential References Not Discussed: None that I saw. Other Strengths And Weaknesses: No others. Other Comments Or Suggestions: 1. I would suggest putting a definition of the Pareto optimal set $O^*$ in Section 1.1. It would be preferable for this to appear before it is referenced in line 97. 2. There is an unfilled reference XX in line 85 right side. Questions For Authors: 1. The sample complexity guarantees in Theorem 4.3 are shown to include a term $\lambda_\alpha$ which looks to be $\tilde{O}(\sum_{T \geq 1} \frac{1}{T^{\alpha-1}})$ where $\alpha$ is a free quantity restricted to $\alpha > 2$. It seems that the claimed regret bounds would require that $\lambda_\alpha = \tilde{O}(1)$, but I don't see how this would be the case when the factor $\sum_{T \geq 1} \frac{1}{T^{\alpha-1}}$ could be as large as $\sum_{T \geq 1} \frac{1}{T}$. Maybe it is just not clear to me what the summation $\sum_{T \geq 1}$ is over exactly. 2. The sample complexity guarantees in Theorem 4.3 also restrict the confidence level $\delta$ to be in the range $\delta \leq \frac{d^2}{5^d}$, which depends exponentially on the dimension. This seems to be highly restrictive, especially in high dimension settings. Can this be avoided? I didn't see this requirement in related work. 3. Can the approach be extended to convex sets? I did not identify any specific arguments that restricted the algorithm or analysis to polytopes. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful feedback and detailed evaluation of our work. Below, we specifically address each point raised in the review. 1. $α$ is a parameter of the algorithm. We introduced generic confidence bonuses in l.255-l.257, and Thm 4.3 upper-bounds the sample complexity of e-cAPE when it is run with confidence bonuses in the form described in the statement (depending on $α$). In practice, $\alpha$ is set at the beginning of the algorithm. We give additional clarification on $Λ_α$ in the answer to question 2 below. 2. We thank the reviewer for this insightful comment. The condition on $δ$ is actually loose and can be removed (at the cost of an additional second-order term, independent from $δ$). First, we would like to mention that this condition was actually needed only for the sample complexity bound and not for the correctness of the algorithm (e-cAPE can be run for any $δ$ and outputs a correct answer with probability larger than $1−δ$). We acknowledge that this is unclear from our current statement of Thm 4.3. The term $5^d$ that appears in the calibration function $g(t,δ)$ is due to the covering number of the unit sphere, which arises from L2 norm concentration with the covering technique. The condition on $δ$ appeared from a sub-optimal upper bound on $g(t,δ)$ in the proof, specifically in l.1300. To see how this can be improved, we sketch below the proof of Thm 4.3 * **Bounding the stopping time under consecutive good events** Let $τ$ be the (random) stopping time of e-cAPE, and observe that for any $T>0$, $$\min(τ,T)⩽T/2+\sum_{t=T/2}^T\mathbb{1}_{(τ>t)}.$$ In Proposition C.3, we show that if the good event $E_{t}$ (defined in l.255-256) holds and e-cAPE does not stop at round $t$, then for any correct answer $(S,I)\in M$, either $b_t$ or $c_t$ is underexplored (i.e. not pulled enough wrt some gaps that are function of $S, I$). 
This is expressed by saying that $\\{b_t,c_t\\}\cap W_t(S,I)\neq ∅$; $W_t(S,I)$ is defined in equations 37-39 (l.980-83). Assuming $$E^T:=\bigcap_{\frac{T}{2}⩽ t⩽T}E_t$$ holds and fixing an arbitrary correct answer $(S,I)$, we have $$ \min(τ,T) ⩽ T/2+\sum_{a=1}^K\sum_{t=T/2}^T\mathbb{1}_{((b_t=a\lor c_t=a)) \land (a\in W_t(S,I))}.$$ Introducing for any subset $U\subset[K]$, $$R(U,T) :=\sum_{a\in U}\sum_{t=T/2}^T\mathbb{1}_{((b_t=a\lor c_t=a)) \land (a\in W_t(S,I))},$$ it follows from the definition of $W_t(S,I)$ (equation 37-39, l.980-83) that $$R(S,T)⩽\sum_{a\in S}\frac{32σ^2}{\Delta_a^{2}(S)}f(T,𝛿);R(I,T)⩽\sum_{a\in I}\frac{8σ^2}{\eta_a^2}g(T,𝛿)$$ and $$R(O^*,T)⩽\sum_{a\in O^*}σ^2\max(\frac{8g(T,𝛿)}{\eta_a^2},\frac{32f(T,𝛿)}{Δ_a^2(S)}).$$ * **Former sub-optimal step and novel formulation of Thm 4.3** At this step, the idea was to write $R(O^\star)$ a sum of terms in the form $\sum_{a\in O^*}\max(\frac{1}{Δ_a^2(S)},\frac{1}{\eta_a^2})h(T,𝛿)$ (this would further make appear the complexity term $C(S,I)$, cf Eq.11 (l.311)). Recalling $$f(T,𝛿)=\log(\frac{2k_1KdT^α}{𝛿})\text{ and }g(T,𝛿)=4\log(\frac{2k_1K5^dT^α}{𝛿}),$$ this is where we used the condition: as for $𝛿⩽ d^2/5^d$ we have $\log(5^d/𝛿)⩽2\log(d/𝛿)$ and $g(T,𝛿)⩽ 8f(T,𝛿)$; and we set $h(T,𝛿)=8f(T,𝛿)$. To fix this loose step and remove the restriction on $𝛿$, observe that $$R(O^*,T)⩽\sum_{a\in O^*}32σ^2\max(\frac{1}{\eta_a^2},\frac{1}{Δ_a^2(S)})f(T,𝛿)+Q(\mu,O^*)$$ where $Q(\mu,U)=\sum_{a\in U}32σ^2\frac{\log(5^d/d)}{\eta_a^2}$. Applying the modification above and following the remaining proof of Thm 4.3, we prove that for any $𝛿\in(0,1)$, $$\mathbb{E}[τ]⩽256σ^2 C^*(\mu)\log(128σ^2C^*(\mu)(2k_1Kd/𝛿)^{1/α})+4Q(\mu,F^c \cup O^*)+Λ_α$$ where $F$ is the feasible set. * **Additional Clarification on $Λ_α$** By definition, we have (l.1330) $Λ_α=\sum_{T⩾ 1}\mathbb{P}((E^T)^c)$. 
We showed above that when $E^T$ holds,
$$\min(τ,T)⩽ T/2+R([K],T).$$
Then, letting $\tilde T$ be such that $\forall T⩾\tilde T,R([K],T)<T/2$, for $T⩾\tilde T$, $τ >T⟹ (E^T)^c$. Thus,
$$\mathbb{E}[τ]⩽\tilde T+\sum_{T⩾\tilde T}\mathbb{P}((E^T)^c)⩽\tilde T+Λ_α,$$
and $Λ_α$ is bounded using Lem D.3 in the appendix.

3. We appreciate the reviewer's insightful observation. Indeed, our approach could be extended to general convex feasible sets. However, the main challenge would be computational. Specifically, for a feasible set $P$, our algorithm should compute at each iteration quantities such as $dist(x,P)^2$ (squared Euclidean distance to $P$; a convex quadratic program with linear constraints) and $dist(x,P^c)^2$, which is not always convex, making it significantly more complex in general. In the special case of polyhedral sets, the latter distance has a closed-form expression, simplifying computations considerably. Additionally, another computational aspect (albeit minor) is the verification oracle for membership in $P$. Given that polyhedral sets encompass many practical scenarios while ensuring efficient algorithmic costs, we chose to focus the presentation on this setting.

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed discussion and walking me through the proof steps. I found this alternative proof method convincing and have a clearer picture of the results in general. Is there anything that we can say about the tightness of this modified bound? The additional term is something like $d \sum_a \frac{1}{\eta_a^2}$. This seems to be incomparable to $C_M^*$.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for the follow-up and are glad the clarification helped. Regarding the additional term $d \sum_{a \in O^\star \cup F^c} \frac{1}{\eta_a^2}$, we agree that in general it is not directly comparable to $C_M^\star$, which involves gaps tied to Pareto set identification.
However, as it is $\delta$-independent, when $\delta$ is small the leading term will be $C_M^\star$, which is also the term featured in our lower bound. The tightness of this second-order term depends directly on the tightness of the L2 norm concentration (i.e. on the choice of the confidence function $g(t,\delta)$). With standard covering arguments, it scales with the covering number of the unit sphere, which is exponential in the dimension. This can be improved under stronger assumptions on the distributions of the arms. While our analysis assumes marginal-wise $\sigma$-sub-Gaussianity (a standard assumption in multi-objective bandits, e.g., Auer et al. 2016), tighter bounds (for smaller calibration functions $g(t,\delta)$) can be obtained under stronger assumptions:

* **Multivariate Gaussian with known covariance**: if we assume each arm to be multivariate Gaussian with known covariance, using Hanson-Wright concentration for Gaussian vectors (see Rudelson and Vershynin 2013, Hanson-Wright inequality and sub-gaussian concentration) will improve the dependency in the log from exponential in the dimension to the operator norm of the covariance matrix.
* **Independent marginals**: Kaufmann and Koolen 2021 (Mixture Martingales Revisited with Applications to Sequential Tests and Confidence Intervals) provides refined concentration bounds on the sum of KL divergences, from which tighter L2-norm concentration can be deduced for random vectors with independent sub-gaussian marginals. Using their results, we could state a high-probability bound on the sample complexity of the resulting algorithm (instead of the expected sample complexity we bound here) where the second-order term would be in $\log\log(d)$.

Tightening the second-order term under realistic assumptions on the arms is an interesting direction that we plan to explore in future work.
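As a side note, the closed-form distance for polyhedral sets mentioned in the rebuttal's point 3 can be illustrated with a small numerical sketch: for $P=\{y: Ay\leq b\}$ and $x\in P$, the closure of the complement is a union of halfspaces, so $dist(x,P^c)$ is the minimum of the distances to the bounding hyperplanes. The function name and the example polytope below are illustrative, not from the paper:

```python
import numpy as np

def dist_to_complement(x, A, b):
    """Distance from x to the complement of the polyhedron P = {y : A y <= b}.

    For x inside P, the closure of the complement is the union of halfspaces
    {y : a_i . y >= b_i}; the distance to each halfspace is
    (b_i - a_i . x) / ||a_i||, so the minimum gives dist(x, P^c) in closed
    form -- no optimization needed. For x outside P, the distance is 0.
    """
    slack = b - A @ x                     # componentwise slack b_i - a_i . x
    if np.any(slack < 0):                 # x already lies outside P
        return 0.0
    return float(np.min(slack / np.linalg.norm(A, axis=1)))

# Example: the unit square [0, 1]^2 written as A y <= b.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])
d = dist_to_complement(np.array([0.3, 0.5]), A, b)  # nearest facet is y_1 = 0, so d = 0.3
```

In contrast, $dist(x,P)^2$ for a point outside $P$ requires solving a convex quadratic program, which is why restricting to polyhedral sets keeps the per-iteration cost low.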
PDE-Transformer: Efficient and Versatile Transformers for Physics Simulations
Accept (poster)
Summary: This paper presents an enhanced diffusion transformer architecture through the integration of several established techniques, including multi-scale modeling and shifted window attention mechanisms. The improved model is subsequently applied to partial differential equation (PDE) solving tasks. Extensive experimental evaluations demonstrate the effectiveness of the proposed architectural modifications.

Claims And Evidence: All claims are clear to me.

Methods And Evaluation Criteria: The proposed method appears to primarily incorporate well-established techniques in deep learning, making the specific contributions less distinct. Additionally, the experimental evaluation employs relatively weak baselines. It would be beneficial to include comparisons with recent advancements in PDE foundation models and neural PDE solvers [1,2] for a more comprehensive assessment.

[1] Scalable Transformer for PDE Surrogate Modeling
[2] PDEformer: Towards a Foundation Model for One-Dimensional Partial Differential Equations

Theoretical Claims: This paper does not have any theoretical claims.

Experimental Designs Or Analyses: The experiments and analyses are extensive and content-rich. The writing is clear and well-structured.

Supplementary Material: I have briefly reviewed the experiments, implementation details, and visualizations provided in the supplementary material.

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: It is worth noting that UniSolver [1], a recent work also built upon the diffusion transformer architecture, addresses similar PDE generalization challenges. A detailed comparison and discussion with UniSolver should be included to highlight the distinctions and relative advantages of the proposed approach.

[1] Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers

Other Strengths And Weaknesses: The main concern is lack of novelty.

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.
Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for the review and feedback. We want to address your remaining concerns in the following:

**Stronger Baselines**

While there can certainly be more baselines, we politely disagree that the baselines chosen are "relatively weak". For example, we compare extensively against the scalable operator transformer scOT [1a] and UDiTs [2b]. They are both very strong and recent baselines (both published at NeurIPS) for scientific machine learning and SOTA diffusion transformer architectures. Thank you for mentioning the paper [3c], FactFormer, which is also a possible baseline, so we include it in our comparison of transformer architectures for a more comprehensive evaluation. See the following table for a comparison between FactFormer and PDE-Transformer (extending table 1 in the main paper):

| Model | nRMSE(1) | nRMSE(10) | Time (h) | Params | GFlops |
| -------- | ------- | ------- | ------- | ------- | ------- |
| PDE-S | 0.044 | 0.36 | 7h 42m | 33.2M | 19.62 |
| FactFormer | 0.069 | 0.65 | 38h 8m | 3.8M | 66.76 |

PDE-S clearly outperforms FactFormer. Note that FactFormer has fewer parameters, since we were using the implementation by the authors of [3c] (from GitHub). As a weight-computation tradeoff, we preserved the original architecture with fewer weights, but gave FactFormer a significantly larger computational budget. Hence, despite having fewer parameters than PDE-S, FactFormer trains much longer and requires more than three times as many floating point operations. We believe this is a fair comparison for FactFormer. The resulting performance of FactFormer is significantly lower despite the additional operations.

For the PDEformer model [4d], we will include it in the related work. PDEformer only targets 1D PDEs and an extension to 2D PDEs seems nontrivial. Moreover, it constructs a graph using the target PDE, which is not always available in the general setup we are targeting.
Similarly, Unisolver [5e] is mostly orthogonal to our work: it primarily focuses on conditioning on PDE-specific information (PDE equation, boundaries, etc.) using language embeddings from LLMs. Note that both [4d] and [5e] are only available as preprints so far.

[1a] Poseidon: Efficient Foundation Models for PDEs, https://arxiv.org/pdf/2405.19101
[2b] U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers, https://arxiv.org/pdf/2405.02730v1
[3c] Scalable Transformer for PDE Surrogate Modeling, https://arxiv.org/pdf/2305.17560
[4d] PDEformer: Towards a Foundation Model for One-Dimensional Partial Differential Equations, https://arxiv.org/pdf/2402.12652
[5e] Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers, https://arxiv.org/pdf/2405.17527

**Novelty**

Even though PDE-Transformer combines improvements from different more established architectures in computer vision, the final architecture is novel and follows the paradigm "use what works best". It gives SOTA performance on learning PDEs with significantly improved scalability. The modifications are carefully evaluated against the newest, similar SOTA transformer architecture. Additionally, the separate channel (SC) variant is not used in any previous work, and fundamentally improves the downstream performance. For building scientific foundation models, this is one of the most critical aspects: effective pretraining on large datasets so that finetuning on difficult new PDEs works. We have shown that finetuning a pretrained network works much better for the separate channel (SC) version than when mixing channels (MC). We believe this is an important empirical finding.

We therefore kindly ask you to reconsider your overall recommendation, and we'd be happy to discuss any remaining open aspects.
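For readers comparing the nRMSE numbers in the tables above: nRMSE is typically the RMSE normalized by the root-mean-square magnitude of the ground truth. A minimal sketch of one common convention (the paper's exact normalization, e.g. per-channel or per-sample averaging, may differ):

```python
import numpy as np

def nrmse(pred, target, eps=1e-8):
    """Normalized RMSE: RMSE divided by the RMS magnitude of the target.

    One common convention in neural PDE benchmarks; the exact averaging
    scheme used in the paper may differ.
    """
    num = np.sqrt(np.mean((pred - target) ** 2))
    den = np.sqrt(np.mean(target ** 2)) + eps
    return float(num / den)
```

With this convention, a trivial all-zero prediction scores roughly 1, so values well below 1 (as in the tables) indicate the model captures most of the solution's magnitude.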
Summary: The paper presents a transformer model called PDE Transformer designed to solve PDEs (partial differential equations), therefore potentially allowing for physical simulations. The model is based on the diffusion transformer architecture (DiT), and as such can be trained not only for forecasting, but also for generation. To the best of my understanding, one of the main additions to the architecture is to use down- and upscaling tokens at the end of each transformer stage (instead of on the query-key-value tuple of the self-attention operation, as was introduced before in another version of the architecture), which allows for a much faster training time, and the application of the architecture to a PDE learning setting. The paper claims to beat SOTA architectures on a benchmark PDE forecasting dataset including multiple PDEs, and for so-called "downstream tasks": more complex PDEs with various boundary conditions on specific geometries.

# Update after the rebuttal

I appreciate the efforts made by the authors during the rebuttal and I think the paper is of overall good quality. I am maintaining my score as it is.

Claims And Evidence: The claims are clear: the suggested model outperforms SOTA models for deep-learning-based PDE simulation. The evidence given is proper but not entirely convincing, considering the following points:
- The number of SOTA models it is compared to for the main PDEs is quite limited: 3 models, only transformer-based. Why not compare to other types of PDE-learning models that are also quite efficient and always competing, such as graph-based models and neural-operator-based models (e.g. Message passing neural PDE solvers from Brandstetter et al. 2022, MAgNet from Boussif et al. 2022, or FNOs, which I don't need to introduce)?
- I am honestly not sure why different SOTA models are used for the main PDE prediction tasks (trained from scratch) and the "downstream" tasks?
They can all do both predictions, so we might as well see the performances of all models on all tasks!
- Also, one of them (UDiT) is basically performing equally to PDE-Transformer, although it is slower at training time. The suggested model is therefore indeed faster, but not necessarily "outperforming" SOTA. Perhaps this should be highlighted in the main claims!
- The configurations in which the model is tested are also quite limited: only one spatial resolution is used (and quite a coarse one for typical PDE datasets: 256x256) and one time resolution (30 steps for the 600 different PDE trajectories) for a rather low horizon of 10 steps. I appreciated the studies on the patch and window sizes and supervision vs probabilistic learning, but I think time and space resolutions are of paramount importance for PDE learning (if I had to choose, I would put the extra studies in the appendix and the ones on resolution in the main paper if it causes a space problem). Some models might be more efficient for longer horizons and/or higher spatial resolutions! I think it would be fairer to do more experiments with various resolutions and conclude on the good and bad points of multiple models comparatively (for example, FNO and variants are known to be quite good at zero-shot super-resolution in both space and time tasks).
- The same two previous comments can be said for the "downstream tasks": 4 models are compared, including some models that are not adapted to strange geometries (which is mentioned by the authors, which I appreciate!). Similarly, experiments with various resolutions would be interesting.
- Note that I really appreciated the pretrained experiments on the downstream tasks, I think it is interesting and impressive; however, I did not understand if the other models were also retrained or only trained from scratch? If the latter, the comparison with the pretrained PDE-transfo is a little unfair.
(Cf. questions for more details.)

Methods And Evaluation Criteria: The proposed methods do make sense, as they basically build on top of successful architectures, which may not be the most creative approach but makes total sense! The benchmark dataset is good and gathers many PDEs (APEBench). Perhaps it would make sense to show the typical multiple metrics for forecasting (MAE, MSE, etc., rather than only nRMSE) in order to have a full view of the models' performances. Also, more experiments would be needed to reach a definite conclusion, as mentioned previously.

Theoretical Claims: No theoretical claims are given in this paper.

Experimental Designs Or Analyses: The experimental design seems perfectly sound, except that, as mentioned in the Claims And Evidence section, more experiments regarding spatial and temporal resolutions, higher time horizons and more SOTA models would bring more convincing evidence of the suggested model's performance.

Supplementary Material: The supplementary material gives details about training configurations, details about the PDEs used for downstream tasks, and prediction visualisations. These details are useful to have. However, I have a comment on the autoregressive prediction visualisations for the different PDEs: as much as I appreciate these (and they are quite impressive), right now we only see the predictions from the PDE-transfo model with the observed frames. It would be interesting and more informative to also see some of the SOTA models' prediction visualisations in comparison, side by side with the observed frames.

Relation To Broader Scientific Literature: The contribution of the paper seems to be in terms of pure performance in the PDE learning domain, as it competes with the state of the art while being apparently faster to train than some SOTA PDE models.
In terms of pure architecture, it is however not very original, as it is basically the UDiT architecture but with up- and downsample tokens added at the end of transformer stages instead of in the attention dot products themselves, applied to a PDE forecasting task. It can be seen from the performance table that the only advantage of the PDE transformer over UDiT is the training time, but the performance is extremely close.

Essential References Not Discussed: Relating to the SOTA models previously mentioned in the review, it appears that the authors do not mention PDE learning models based on graph neural networks, while these were of very high significance in the domain at some point, particularly for their super-resolution and irregular-mesh learning capabilities (and I believe they still are). It is of course necessary to compare to recent transformer models, but at the end of the day all significant SOTA models should be considered, regardless of the architecture type. Such graph-based models include, for example:

Brandstetter, J., Worrall, D., & Welling, M. (2022). Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376.

Boussif, O., Bengio, Y., Benabbou, L., & Assouline, D. (2022). MAgNet: Mesh agnostic neural PDE solver. Advances in Neural Information Processing Systems, 35, 31972-31985.

I am not saying you need to compare to all kinds of models that exist, that would be impossible! But I do believe the SOTA models used in comparison need to include some of these graph-based models, in terms of pure forecasting performance, but also super-resolution abilities and training time (since it is one of the biggest advantages of the model).

Other Strengths And Weaknesses: Strengths:
- The paper is well written and well presented, and rather clearly explained.
- I appreciate the practical idea of using up- and downsample tokens in between transformer stages instead of in the attention computation to make it more time efficient, and the idea of applying a diffusion model to the PDE learning domain (as it seems to work pretty well!)

Weaknesses: The main weakness is therefore the originality of the paper, given that its contribution mainly resides in a very slight practical change to an already existing architecture and the application of the architecture to PDE learning. This could be interpreted as a "real-world" use case, since PDE learning can be directly applied to physical simulations; however, the paper does not really take it this far by doing actual physical simulations (potentially combining multiple PDEs) and rather tries it directly on PDE data. I do believe it is still a useful contribution, but then I think it would greatly benefit from stronger experiments (on multiple spatial and time resolutions, as mentioned previously) to counterbalance the lack of contributions in other aspects.

Other Comments Or Suggestions:
- Is the idea of "expansion rate" really useful? It seems to me that it only illustrates the tradeoff between better performance/scalability obtained with lower/higher patch (token) size, which is already known! Also, you mention it in the abstract but you do not define it there, only later in the article. If you want to keep presenting this notion, please define it in the abstract before mentioning it or don't mention it!
- When the nRMSE is defined, a dependency on horizon time could be added (meaning defining it as nRMSE(t)), since it is presented like that in table 1, to be more coherent with the notations.

Questions For Authors:
- Note that I really appreciated the pretrained experiments on the downstream tasks, I think it is interesting and impressive; however, I did not understand if the other models were also retrained or only trained from scratch?
If the latter, the comparison with the pretrained PDE-transfo is a little unfair, and it would be interesting to see the performance of these models pre-trained as well in comparison.
- In addition to additional experiments, I think it would be interesting to consider super-resolution experiments, in space and/or time, as I think it is very important in PDE learning (the ability to have a higher resolution for the prediction, based on coarser-resolution models like global weather forecasting models, is more important to me than a faster training time; once a model is trained in real life, what matters is rather the inference time), and since one of the bigger contributions is basically the application of a known architecture to PDEs, it would be significant to see if it compares to neural operator models and graph neural network models, which are both adapted to these tasks (using respectively implicit functions and mesh-agnostic nodes).

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the positive review and feedback.

**Pretraining on Downstream Tasks**

Baseline models are not pretrained. We have a version of our own model that is trained from scratch and one that is pretrained. In all cases, our model trained from scratch is better than the baselines, and performance is further improved if it is pretrained, so we think that's fair for a comparison. We also compare with Poseidon (scOT-B, see figure 9 in the appendix), which is pretrained by the corresponding authors on their own large pretraining dataset.

**Superresolution experiments**

In principle, PDE-Transformer can be trained on a coarse resolution and then applied to high-resolution data later. This is tied to how the attention operation can be viewed as an integral kernel operation (similar to many neural operator papers). Within each attention window, a 2D grid with relative positions is used as an input to an MLP that outputs position-dependent attention scores. We can modify the relative position grid and window size to obtain a model that operates at an increased resolution. Because of two central modifications that improve the scalability of the architecture, super-resolution without finetuning is difficult to achieve. Those two modifications are patching and the down- and upsampling of tokens within the multi-scale architecture, which both assume data to be at a fixed resolution. Based on your suggestions, we have evaluated PDE-Transformer trained at resolution 256^2, modified the window size, and evaluated the modified model at resolution 512^2. While the results still showed the correct dynamics, the accuracy was affected. When finetuning PDE-Transformer to new resolutions, we expect the proposed architecture to generalize well and achieve high performance.

**Graph-based models**

We agree that graph-based models are an important baseline for unstructured meshes.
For regular grids, previous work has shown that graph networks have no advantages, but only require a slightly larger weight count compared to networks that leverage the inductive biases from the grid. See e.g. [2] for a comparison. Nonetheless, we have started to train Message Passing Neural PDE Solvers (MPNN) on the pretraining dataset and plan to add a comparison in the updated manuscript. In the MPNN paper, the 2D experiments only involved grids of size 32^2. For the 256^2 resolution, training has started, but takes several days to finish, because of much bigger memory and compute requirements. Analyzing the architecture, a clear conclusion to draw is that graph networks like MPNN are not designed to be scaled up to large-scale tasks. Due to the required training time, we can only report very preliminary results (epoch 2/100) for the rebuttal. | Model | Epoch | nRMSE(1) | nRMSE(10) | Time (h) | Params | GFlops | | -------- | ------- | ------- | ------- | ------- | ------- | ------- | | PDE-S | 100 | 0.044 | 0.36 | 7h 42m | 33.2M | 19.62 | | MPNN | 2 | 0.283 | 1.07 | 37h 2m (1 GPU) | 2.44M | 396.16 | As mentioned above, we plan to target unstructured grids and datasets as future work, and will make sure to compare to graph networks more extensively for these cases. [2] Differentiability in Unrolled Training of Neural Physics Simulators on Transient Dynamics, https://arxiv.org/pdf/2402.12971 **Different models for pretraining/downstream tasks**: We considered different models for pretraining and the downstream tasks. First, we wanted to focus on transformer architectures that are more similar to our architecture for pretraining. For the downstream tasks, we have more "standard" scientific ML baselines. It would be possible to train every model on all tasks, giving an even more extensive comparison, but we believe that grouping models (transformer/scientific ML) is a fair approach that demonstrates relative performance differences. 
There is one architecture (scOT) that fits in both categories and it was included in both pretraining/finetuning tasks. We included an additional SOTA transformer model, FactFormer. See our response to reviewer yvZY. **UDiT-S better than PDE-S?** In Table 1, UDiT-S is *slightly* better than PDE-S for nRMSE(1). Still, in almost any case it is better to use PDE-S. The different configs S,B,L are not always directly comparable between model architectures, e.g. UDiT-S has more parameters than PDE-S. The performance of transformer models is known to increase when scaling the model size, so a "larger" UDiT-S can beat a "smaller" PDE-S. The B config PDE-B trains much faster than UDiT-S (10h40m vs. 18h 30m) but now clearly beats UDiT-S in terms of nRMSE (0.038 vs. 0.042). Inference speed is correlated to training time. UDiT does not scale well for higher resolutions as demonstrated in figure 3. We will improve the paper based on your many other helpful suggestions. We had to keep our answers short due to the strict rebuttal character limit. We are happy to continue in more detail during the discussion phase.
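The scalability differences discussed above (figure 3, training times) largely come down to attention cost: full self-attention over $N$ tokens scales as $O(N^2)$, while shifted-window attention scales as $O(N w^2)$ for window size $w$. A back-of-the-envelope count (the token numbers are illustrative, not the exact model configuration):

```python
def attention_pairs(n_tokens, window=None):
    """Count pairwise token interactions per attention layer.

    window=None -> full (global) self-attention: n^2 pairs.
    window=w    -> non-overlapping w x w windows: each window of w*w
                   tokens attends within itself.
    Illustrative count only; ignores heads, channels and window shifts.
    """
    if window is None:
        return n_tokens ** 2
    tokens_per_window = window * window
    n_windows = n_tokens // tokens_per_window
    return n_windows * tokens_per_window ** 2

# Doubling the spatial resolution quadruples the token count:
low = 64 * 64      # e.g. a 256^2 input with patch size 4
high = 128 * 128   # e.g. a 512^2 input with the same patch size
```

Here global attention cost grows 16x when going from `low` to `high`, while windowed attention cost only grows 4x (linearly in the token count), which is one way to see why windowed architectures scale better to high resolutions.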
Summary: The paper presents PDE-transformer, a new transformer approach trained simultaneously on different physical systems. They propose an alternative to the DiT architecture that is suited for PDEs. Specifically, they use a multi-scale architecture and shifted-window attention to avoid the quadratic complexity. They also propose a separate-channel strategy for dealing with physical systems with different numbers of physical channels. They evaluate their method on a variety of PDEs with specific behaviors and on complex PDEs for finetuning, where they show competitive or SOTA performance.

## update after rebuttal

The authors proposed a nice approach for multi-task PDE solving. There are some results that are surprising to me, which have been mentioned in the rebuttal. Overall, the paper is of good quality and thus I keep my score as it is.

Claims And Evidence: Yes

Methods And Evaluation Criteria: They used a variety of datasets for their experiments and also tested on very hard datasets for some additional studies, showing that using a transformer pretrained on simpler datasets can help for downstream tasks such as finetuning on very complex PDEs.

Theoretical Claims: No theoretical claims

Experimental Designs Or Analyses: I did check the different experimental designs and analyses and did not find a particular issue. However, I am a bit concerned by the analysis made of the diffusion training. Diffusion models are known to correctly model complex distributions, which could be the case when pretraining a model on a variety of PDEs.

Supplementary Material: I reviewed the supplementary material, notably the additional results and the dataset section. The appendix is particularly smooth and simple.

Relation To Broader Scientific Literature: The authors proposed a model in line with the existing literature on foundation models for PDEs.
Essential References Not Discussed: The authors discussed the main related works.

Other Strengths And Weaknesses: The paper tackles an important problem: it learns a single model for solving multiple physical systems. It exhibits strong performance on a diverse set of datasets and has been tested on very complex datasets in a downstream task, showing strong performance. The paper is clear and I particularly liked the appendix with all the dataset descriptions. The originality of the paper is a bit weak; it extends existing works from computer vision with DiTs to scientific problems. Using U-shaped transformers is not novel, but its application to PDEs with a multi-scale approach makes sense. One potential weakness of the paper is the lack of support for flexible grids, which should be a key property of neural solvers. I think it could be easily extended to handle irregular grids.

Other Comments Or Suggestions: No more comments. See above sections.

Questions For Authors: I am a bit concerned by the conclusion made with MSE training compared to diffusion training. I don't really see why MSE training could yield superior performance, notably when modeling complex distributions such as multi-physics data. In my opinion, diffusion training should exhibit superior performance. Could you elaborate more on why MSE training is superior here please.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the positive review and feedback.

**MSE vs. Diffusion**

This is a very important and interesting issue. We believe that scientific machine learning models should in general be capable of learning the full posterior; however, how useful this is depends on the specific task and the underlying data distribution. When training a neural solver, for example, the mapping that the neural solver learns is deterministic when given simulation hyperparameters and initial conditions. Another way to phrase this is that given a point in the input space (representing the simulation hyperparameters and initial conditions), this is mapped to a single point in the output space (the solution field at a certain time). In this situation, supervised training is a suitable choice, especially if we use a metric based on the MSE/L2 distance for evaluation and training.

This situation changes when the mapping becomes probabilistic, for example when the exact simulation hyperparameters are unknown. This case more closely resembles the setup in the paper. Then, instead of learning a deterministic mapping, we aim to learn a mapping that transports a single point in the input space to a distribution of possible solutions in the output space. In this case, we can learn the mapping via a diffusion model.

What happens when we train a diffusion model, but use the common MSE/L2 distance for evaluation? For simplicity, let's say that this distribution is approximately Gaussian. We only have *samples* from the target distribution that we can compare to. At the same time, our diffusion model will draw a sample from the learned target distribution, which will match the target distribution if our model is trained well. Now, we evaluate the MSE/L2 distance between the sample from the target distribution and the sample from the "learned" distribution. Let's denote this value by the random variable X.
One factor plays a key role now: a metric based on MSE/L2 distance is not optimal when the diffusion model learns the full posterior. In fact, it would be better if the model just learned to predict the *mean* of the posterior, because that will decrease the mean and variance of X. If we use supervised training, the network will learn exactly the mean of the posterior. To summarize, if we use metrics based on MSE/L2 distance for the evaluation, then in theory they will favor supervised models over diffusion models, because under these metrics it is simply not optimal, and therefore not incentivized, to learn the full posterior. Of course that does not mean that supervised models are better than diffusion models. In fact, we advocate for using diffusion models. However, if the main objective is based on MSE, then it is very difficult to beat a supervised model in terms of the accuracy-compute tradeoff. For more complicated data domains and tasks, we should always use expert knowledge and metrics that make sure the full posterior is considered. Then we can reasonably show the advantages of diffusion models over supervised training. In this paper, we use nRMSE, because of the large number of different PDEs and because there is no single other metric that would work well for all PDEs. For nRMSE, we see that the diffusion models improve significantly when increasing the number of parameters from config S to L and when increasing the number of steps used for inference, closely approaching the supervised baseline.

**Generalization to Unstructured Meshes**

It's possible to generalize the architecture to irregular grids, though this is not the focus of the current paper.
- One straightforward approach is to couple PDE-Transformer with GNO layers as encoder/decoders (instead of the patchification) that map from a given geometry to a latent regular grid as used in e.g.
[1] - Alternatively, it is also possible to generalize the notion of attention window to be defined on local neighbourhoods. Correspondingly, token up- and downsampling are replaced by respective graph pooling operations. This requires more fundamental changes to the architecture making it a type of graph neural network. Both approaches are planned as directions of future work. [1] Geometry-Informed Neural Operator for Large-Scale 3D PDEs, https://arxiv.org/pdf/2309.00583 Please let us know if this could clear your open questions. We are happy to discuss more.
Summary: This paper introduces PDE-Transformer, a transformer designed for PDE data that can be used either in a supervised learning task or in a diffusion model. The model is extensively tested on various benchmarks. Claims And Evidence: Yes, the claims seem supported by convincing evidence. Methods And Evaluation Criteria: The authors benchmark their models on many datasets and study the transfer properties to datasets unseen during training. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: The experiments are extensive and look good to me. Supplementary Material: I skimmed through the SM. Relation To Broader Scientific Literature: The contributions of the paper are good progress towards better architectures in the field. Essential References Not Discussed: None that I can think of. Other Strengths And Weaknesses: Strengths:
- The paper is very dense with many different ideas. All these ideas are supported with numerical experiments.
- The results are convincing.
- I like the downstream tasks study.

Weaknesses:
- The model is currently limited to 2D regular grids.
- The paper is very dense, which makes the architecture not 100% clear in one read.
- The fact that the model can be used in supervised and diffusion training gives me mixed feelings: it is a good idea, but overall, what does a diffusion model additionally bring here? It doesn't seem to be discussed.
- From the conclusions of Table 3, it is not clear to me which one of MC or SC is better.

Other Comments Or Suggestions:
- Line 158 (right paragraph) should be "$S$ is" and not "S".
- Line 210 (r.p.) should be "separate" (typo).

Questions For Authors:
- Can the authors comment on the use of bf16 precision? This seems like a bold choice to me for this type of data, which is usually in fp32. With bf16, don't you lose a lot of precision, especially for autoregressive tasks?
- One clear advantage of using diffusion models here would be that we would obtain probabilistic samples.
It could lead to uncertainties in the prediction, very valuable for scientific machine learning. Can the authors comment on that, and possibly suggest one experiment to study this (not mandatory, but I think it would strengthen the paper)?
- There are some unclear terms in the text: what is token upsampling/downsampling? PixelShuffle/unshuffle?
- Are images just patchified? No MLP/convolutions? Do the authors know whether their better performance is because they are working in pixel space?
- Why did the authors not compare with other foundation models like MPP and Poseidon?

If the weaknesses and questions are satisfyingly answered, I am willing to modify my score accordingly. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the positive review and feedback.

**BF16 mixed precision**

That's a good point. With BF16 mixed precision we lose precision compared to FP32, but can train a lot faster. In our experiments, we did not see a difference in the evaluation metrics and training loss when switching between BF16 mixed precision and FP32. Note that in practice we can always train first with the faster BF16 mixed precision until the training loss converges and then finetune with full FP32 precision.

**Diffusion models**

In our opinion, scientific foundation models should be designed with the possibility in mind to produce probabilistic samples. Uncertainty estimates are especially useful when dealing with turbulence. Direct numerical simulation is very expensive, so more simplified turbulence models such as Reynolds-averaged Navier-Stokes simulations and large eddy simulations are still prevalent in the engineering community. However, these models produce posterior distributions because of unknown latent parameters of the models. As an example of an application that requires the full posterior, see [1], which learns the supersonic flow around aircraft airfoils using diffusion models. Even more computationally challenging is turbulence in 3D, which can be phrased as a generative modeling task to learn all possible turbulent flow states [2]. Scaling 2D models to 3D requires very computationally efficient 2D models to begin with, which we focus on in this paper. We are working on our follow-up work, which addresses efficient scaling of our transformer architectures to high resolutions in 3D.
[1] Uncertainty-aware Surrogate Models for Airfoil Flow Simulations with Denoising Diffusion Probabilistic Models, https://arxiv.org/pdf/2312.05320v1
[2] From Zero To Turbulence: Generative Modeling for 3D Flow Simulation, https://arxiv.org/pdf/2306.01776

**Token Upsampling/Pixel Shuffle**

Token downsampling refers to an operation that merges multiple (4) tokens into a single token. Token downsampling can be implemented for example via PixelUnshuffle, which is part of the PyTorch library (see torch.nn.PixelUnshuffle in the official PyTorch documentation). Respectively, there is token upsampling, which splits a single token into multiple (4) tokens and can be implemented with PixelShuffle.

**Patchification**

Our implementation uses a single convolutional layer for the patchification, where the kernel size and stride are set to the patch size, which is an efficient standard implementation of patchification. We have performed experiments with extending this single convolutional layer to small convolutional encoders/decoders. However, we didn't see any improvements here, and using a pure transformer backbone with a single patchification layer was optimal for us. Our transformer also works in pixel space directly. In principle it can be easily coupled with additional encoders/decoders. However, doing this needs to be carefully engineered. In computer vision, transformer architectures such as the Diffusion Transformer are mostly coupled with pretrained VAEs and work in a reduced latent space. For scientific machine learning, a pretrained VAE is often problematic, as it makes it difficult to achieve low MSE values.

**Which is better: MC or SC?**

For the pretraining tasks, we are in a situation where there are very large amounts of simulation data to train on. In this case, the mixed channel (MC) and separate channel (SC) variants achieve the same accuracy; however, SC requires more computation.
In this case, MC still wins the accuracy-compute tradeoff. For finetuning, there are domain-specific applications where data is more valuable and scarce. Here we see that, when trained from scratch, MC generalizes better than SC. However, when we don't train from scratch and instead finetune pretrained networks, then (1) results always improve for both SC and MC and (2) SC now clearly beats MC and shows much bigger improvements from pretraining. This is because the SC version is better at transferring what it has learned to new types of simulations. Thus, for typical use-cases of finetuning a pretrained network, SC is clearly preferable.

**Comparison with Poseidon/MPP**

We did in fact compare our model extensively to Poseidon. The model from Poseidon is the scOT model we compare to in all experiments. We also compare to the pretrained model, Poseidon-B, for the downstream tasks. Because of space limitations, we could not include a full comparison in the main text, but it is discussed and we refer to figure 9 in the appendix. We saw Poseidon as the most recent and "toughest" competitor, so we wanted to make sure we have extensive comparisons with it and show that we can outperform it. MPP is already used as a baseline in the Poseidon paper and Poseidon achieves much better performance there, so our priority was to focus on Poseidon.

Please let us know if this could clear your open questions. We are happy to discuss more.
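To illustrate the token downsampling described under **Token Upsampling/Pixel Shuffle** above, here is a minimal pure-Python sketch of the 2×2 merge (the actual implementation uses torch.nn.PixelUnshuffle, whose exact channel ordering may differ from this toy version):

```python
def pixel_unshuffle(grid, r=2):
    # Token downsampling: merge each r x r block of tokens into one
    # token whose channel list is r*r times longer.
    # `grid` is an H x W grid of tokens; each token is a list of channels.
    H, W = len(grid), len(grid[0])
    out = []
    for i in range(0, H, r):
        row = []
        for j in range(0, W, r):
            merged = []
            for di in range(r):
                for dj in range(r):
                    merged.extend(grid[i + di][j + dj])
            row.append(merged)
        out.append(row)
    return out

tokens = [[[1], [2]],
          [[3], [4]]]              # 2 x 2 grid of 1-channel tokens
merged = pixel_unshuffle(tokens)   # 1 x 1 grid with one 4-channel token
```

Token upsampling (PixelShuffle) inverts this rearrangement; the patchification layer plays a similar role at the input, with kernel size and stride equal to the patch size.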
DyCodeEval: Dynamic Benchmarking of Reasoning Capabilities in Code Large Language Models Under Data Contamination
Accept (poster)
Summary: This paper proposes DyCodeEval: a dynamic benchmarking approach designed to evaluate the reasoning capabilities of Large Language Models on code tasks under potential data contamination. By starting with a seed programming problem, DyCodeEval leverages multiple agents to extract and modify problem contexts—without altering the core logic—to generate semantically equivalent variations. Claims And Evidence: The motivation for handling data contamination is well-founded. Traditional benchmarks, such as HumanEval and MBPP, may have been seen by models during training; consequently, their results could reflect memorization rather than genuine reasoning ability. The authors’ core idea—using a dynamic process to revise benchmark problems—is promising for mitigating contamination issues. Still, further elaboration on how they quantitatively measure contamination levels would be helpful. The paper mentions that prior contamination metrics may not align with real-world cases; clarifying the proposed metric or methodology for gauging contamination would strengthen the claim. Methods And Evaluation Criteria: Benchmark Selection: The work primarily focuses on HumanEval and MBPP—two standard code generation datasets that have been around for a while and on which models often perform quite well. It would benefit readers to see experiments on more recent datasets or tasks not so heavily covered in prior training data. Additionally, expanding beyond code completion tasks to more diverse or “harder” datasets could further validate DyCodeEval’s utility. Plan for More Diversity: While the authors mention multiple agents generating various problem contexts, the paper could better highlight how these newly created tasks genuinely probe the model’s reasoning rather than just superficial text changes. If the final results remain similar in difficulty and performance, additional clarity on whether the plan’s quality improves over iterative changes is needed. 
Theoretical Claims: The proofs follow standard probability and combinatorial arguments. One potential point for further clarity might be to highlight assumptions of “uniform sampling” in each theorem (i.e., that scenarios and contexts are chosen with equal probability), as real-world usage could introduce slight biases. Experimental Designs Or Analyses: The authors suggest that DyCodeEval mitigates data contamination by dynamically generating semantically equivalent but contextually distinct problems. Beyond final accuracy, incorporating additional metrics (e.g., measuring plan complexity, solution clarity, or partial correctness) could offer a more nuanced view of LLM reasoning skills. Similarly, comparing performance on newly generated problems that are known to be uncontaminated against older, potentially contaminated benchmarks would illustrate the impact of this approach more concretely. Supplementary Material: yes Relation To Broader Scientific Literature: Robustness Code LLM Evaluation Data Contamination Essential References Not Discussed: no Other Strengths And Weaknesses: see above Other Comments Or Suggestions: see above Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable comments.

> **How Our Dynamic Metric Mitigates Data Contamination**

For the static metric **pass@k**, the same fixed problem prompt is fed to the LLM multiple times, leveraging its sampling capability to generate different outputs. However, since this prompt is publicly available and remains unchanged, **pass@k** becomes unreliable if the prompt is contaminated in the LLM's training data. In contrast, our proposed **DivPass@K** generates multiple randomized problem mutations using our approach before feeding them to the LLM. These dynamically generated prompts are not static, publicly available, or present on the Internet, reducing the risk of data contamination. We will clarify the working mechanism of our dynamic metric in the final version.

> **More challenging and uncontaminated benchmarks**

We also applied our approach to LiveCodeBench, a newly collected competition-level programming benchmark sourced from LeetCode and other platforms. The results, shown in the table below, demonstrate that DyCodeEval can be effectively applied to challenging and uncontaminated benchmarks. Notably, Qwen2.5-Coder's accuracy did not drop as significantly as it did from HumanEval to LiveCodeBench. This is because LiveCodeBench was released after Qwen2.5-Coder, reducing the likelihood of data contamination.

| Model | LiveCodeBench | LiveCodeBench + Ours |
| ---------------------------- | ------------- | ------------------- |
| CodeLlama-13b-hf | 21.4 | 18.3 |
| CodeLlama-7b-hf | 15.6 | 13.5 |
| DeepSeek-V2-Lite | 41.4 | 39.4 |
| Llama-3.1-8B-Instruct | 21.3 | 20.8 |
| Qwen2.5-Coder-7B-Instruct | 39.4 | 36.5 |
| deepseek-coder-1.3b-instruct | 22.1 | 19.3 |
| claude-3.5-haiku | 59.4 | 60.3 |
| claude-3.5-sonnet | 67.7 | 67.6 |

> **Uniform sampling in our assumption**

We will revise our theorems and clarify that they rely on the assumption of uniform sampling.
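For reference, here is a sketch of how a DivPass-style score could be computed on top of the standard unbiased pass@k estimator (the usage shown is our own illustration; the exact DivPass@K definition is the one given in the paper):

```python
from math import comb

def pass_at_k(n, c, k):
    # Unbiased pass@k estimator: probability that at least one of k
    # draws passes, given c passing results out of n total.
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# DivPass-style usage: instead of sampling the model n times on one
# fixed (possibly contaminated) prompt, query it once on each of n
# dynamically generated problem variants, then apply the estimator
# to the per-variant pass/fail results.
results = [True, False, True, True, False]   # e.g. 3 of 5 variants solved
score = pass_at_k(len(results), sum(results), k=2)
```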
> **Other evaluation metrics**

Besides the correctness metric, we also consider the test case pass rate to evaluate partial correctness. The results are shown in the following table. We observe that, after applying our transformation, the test case pass rate increases for some models. This is because our transformation generates diverse variants of the problem, which may change the model's reasoning and make it possible to obtain a partially correct solution.

| Model | HumanEval | HumanEval + Ours |
| ---------------------------- | --------- | ---------------- |
| CodeLlama-13b-hf | 0.38 | 0.37 |
| CodeLlama-7b-hf | 0.29 | 0.33 |
| DeepSeek-Coder-V2-Lite-Base | 0.19 | 0.22 |
| DeepSeek-V2-Lite | 0.29 | 0.21 |
| Llama-3.1-8B | 0.38 | 0.39 |
| Llama-3.1-8B-Instruct | 0.63 | 0.56 |
| Llama-3.2-1B | 0.20 | 0.15 |
| Llama-3.2-3B | 0.31 | 0.33 |
| Qwen2.5-7B | 0.55 | 0.46 |
| Qwen2.5-7B-Instruct | 0.63 | 0.56 |
| Qwen2.5-Coder-7B | 0.65 | 0.37 |
| Qwen2.5-Coder-7B-Instruct | 0.76 | 0.71 |
| deepseek-coder-1.3b-instruct | 0.54 | 0.43 |
| claude-3.5-haiku | 0.86 | 0.78 |
| claude-3.5-sonnet | 0.96 | 0.85 |
Summary: This paper proposes a framework for augmenting existing coding model evaluation datasets by coming up with new scenarios and contexts to generate semantically similar evaluations. The authors use several LLM steps to produce these questions, and evaluate models while attempting to simulate data contamination. The authors compare their generated evaluation questions against other forms of manipulation, and analyze how the performance of a set of models changes. Claims And Evidence: In lines 234-244, the authors claim that their collection of 3 models (Llama 3.2-1B, Llama 3.2-3B, DeepSeek-Coder-1.3B) is diverse "in terms of model architecture, model size, and training methods". This is problematic, as they explore a truly diverse set of models in the following section (12 additional models). Of the 3 original, 2 are the same model family, and all 3 are small models. Furthermore, the abstract claims that their method creates "semantically equivalent variations", but this does not seem to be validated. The authors claim to perform a human study but do not include details to verify further. Methods And Evaluation Criteria: The benchmarks used make sense. However, much of the evaluation does not make sense. There are insufficient details on how the authors finetune models to simulate leakage. Fine-tuning on a small subset of documents could heavily impact the instruction following capabilities of these models if not considered carefully. Furthermore, training directly on (synthetic) evaluation questions does not simulate pretraining leakage, where implementations might be contained within a larger dataset. While their capabilities are more limited, there are open data code models that the authors could use for an exact leakage study (Starcoder). Theoretical Claims: This paper includes unnecessary theorems and claims to analyze the collisions of their method when generating $|S|$ scenarios and $|C|$ coding problem contexts.
This analysis and the page of proofs in the appendix appear to be typical statements of balls-and-bins, coupon-collector, and hash-table-collision type problems. These do not add to the substance of the paper except to include more notation and an appendix section. This also does not address the fact that any of the prior LLM steps in this work could simply be repeated or resampled (with temperature, line 252) in the event of a collision. Experimental Designs Or Analyses: Details around finetuning are underspecified. 4.4 does not give details about mutations used for comparison. All analysis is harmed by the fact that many of the figures have extremely small text, cut-off labels, or overlapping content. The details of the human evaluation are extremely underspecified. The appendix simply states "the consistent rate is around 95%" with no further numeric details. Supplementary Material: Yes, I reviewed the prompts and proofs in the appendix. Relation To Broader Scientific Literature: This paper studies leakage and analyzes common benchmarks and code models, largely following established literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths:
1. Interesting approach to augmenting evaluations
2. Considers that several of the main evaluations in this area may suffer from leakage

Weaknesses:
1. Analysis is very difficult to understand, compounded by the fact that figures contain extremely small text or overlapping content.
2. Some essential experimental details are not considered (see above).
3. Paper spends substantial content on explaining a series of straightforward LLM prompts as agents.
4. Similarly includes sections that do not contribute to the main points (Algorithm 1 is a typical process; section 3.3 does not add to the content). This space would be better used for explaining missing details.
5. Makes claims about semantic equivalence between generated problems but does not sufficiently justify them.
Other Comments Or Suggestions: Typo in the first line of the abstract; further typos at lines 763 and 765; typos in your prompts (line 676). Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thanks for reviewing our paper and the valuable comments.

> **Semantically Equivalent Validation and Human Study**

By "semantically equivalent variations," we mean that the generated problems can be solved by the same code solution as the original. We validate this through:
1. **Automated Validation:** A large language model (LLM) acts as a probabilistic oracle to check:
   - Whether the rewritten problem retains the original meaning.
   - Whether the original solution remains valid for the new problem.
2. **Human Verification (Appendix D):** Two graduate students independently reviewed *N = 30* problem pairs per dataset (60 total), assessing whether the core algorithm and complexity were preserved. They initially disagreed on three pairs but reached consensus after discussion, yielding a 95% agreement rate. We provide the sampled data on our [website](https://github.com/anonymousGithub2022/DyCodeEval/blob/main/resource) (see four CSV files in the directory).

> **Finetuning details**

We fine-tune the selected code LLM on randomly sampled portions of the benchmarking dataset, ranging from 25% to 100%, using a standardized instruction tuning objective. The fine-tuning process employs a learning rate of 5e-5, a batch size of 8, and runs for 20,000 steps. We acknowledge that fine-tuning on a small subset can impact the instruction-following capabilities of the model due to overfitting. However, this phenomenon is precisely the risk posed by data contamination: overfitted models exhibit artificially high performance on contaminated benchmarks, creating a false sense of intelligence while sacrificing generalizability. This issue is empirically demonstrated in Figure 4 (first row), where the red bars highlight the inflated accuracy due to overfitting, while the blue bars indicate the degradation in general capabilities. The presence of such overfitting underscores the need for contamination-free benchmarking, which is the primary motivation of our work.
Regarding pretraining leakage, we note that instruction fine-tuning can override or mitigate the effects of pretraining data exposure. Our study does not aim to analyze pretraining leakage directly but rather to simulate its effects in a controlled manner. To achieve this, we follow established methodologies in the literature, where instruction-tuning-stage leakage is widely used to approximate the impact of training data contamination. This approach allows us to systematically examine how leakage-induced overfitting distorts benchmarking results.

> **Unnecessary theorems**

We strongly believe that these theorems are both necessary and valuable. They provide probabilistic guarantees to benchmark providers, ensuring that an entity with ulterior motives cannot easily overfit a model to achieve artificially high scores on our benchmarks. Thanks to our hierarchical transformation framework, we can control the search space at each transformation layer, effectively mitigating the risk of collisions. This approach allows us to maintain a manageable search space at each layer while achieving a significantly large total search space.

The claim that *"prior LLM steps could simply be repeated or resampled (line 252) in case of a collision"* does not apply to our scenario. Benchmark providers keep their scenario and context pools private to prevent manipulation rather than expose them for adversarial exploitation. As these pools act as a private key, our framework ensures transparent benchmarking while minimizing contamination risk. While brute-force overfitting is possible, our theorems show it would require an impractically large number of trials, making it infeasible. To highlight the advantages of our hierarchical transformation for reducing the risk of collisions, we conducted an empirical evaluation; the setup and results are shown on our [website](https://github.com/anonymousGithub2022/DyCodeEval/tree/main?tab=readme-ov-file#collision-results).

> **Baselines in Sec 4.4**

We did describe and cite the baseline methods in Section 4.4. Below, we provide further details on the specific mutations:
- **Token Mutation**: Randomly replaces a token in the original prompt with another token.
- **Char Mutation**: Randomly inserts a character at a random position in the original prompt.
- **Func Mutation**: Changes the function name style in the prompt, e.g., renaming "MyMethod" to "my_method".
- **Insert Line**: Randomly inserts blank lines in the original prompt.
- **CommSyntax**: Modifies the syntax of comments in the prompt (e.g., changing `# comment` comments to `"""comment"""` style).

These mutations are derived from the robustness-based mutations proposed by Wang et al. (2023). Additionally, **PPM** (Chen et al., 2024) concatenates the original problem description with a newly defined problem description to test robustness. We used publicly available implementations to ensure consistency and reproducibility.
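As a numerical companion to the theorem discussion above, the exact collision probability under the uniform-sampling assumption follows the classic birthday-problem product; a small illustrative sketch (the pool sizes here are made up, not the paper's):

```python
def collision_probability(m, pool_size):
    # P[at least one collision] when drawing m problems uniformly with
    # replacement from pool_size equally likely scenario-context
    # combinations (birthday-problem computation).
    p_no_collision = 1.0
    for i in range(m):
        p_no_collision *= (pool_size - i) / pool_size
    return 1.0 - p_no_collision

# e.g. 100 generated problems from |S| = 1000 scenarios x |C| = 1000 contexts:
p = collision_probability(100, 1000 * 1000)   # roughly a 0.5% chance of any collision
```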
Summary: The paper presents a method for modifying existing LLM coding benchmarks through a 4-stage pipeline to produce new versions of the benchmark that are unlikely to have appeared in training data. This addresses the challenge that LLM developers face when collecting training data and evaluating their models: that data from their evals may appear in the training corpora, and removing it is non-trivial. The pipeline consists of a scenario proposer, context generation agent, prompt rewriter, and validation stage. The paper gives theoretical consideration to the possibility of collisions in the task rewrite process. Then they employ the proposed process with two small coding benchmarks; to evaluate it, they consider how model performance changes when different amounts of data are leaked from the benchmark. They also examine the performance of a number of in-the-wild models on the original static benchmark and the new dynamic one, identifying that overfit models struggle on the new benchmark, and hypothesizing that Qwen2.5-Coder-7B may have data contamination. The paper also performs evaluations of diversity of the generated tasks, stability of the benchmark (in spite of its randomness), and evaluates whether weaker but cheaper language models can be used for task generation. Finally, the paper also introduces a new metric for their dynamic benchmark, DivPass, showing evidence for this metric being more reflective of a model's coding reasoning capabilities than the standard pass@k. Claims And Evidence: First the paper introduces a method for transforming an existing benchmark (HumanEval and MBPP, two standard, albeit small and "toy" compared to the problems LLMs are commonly used on today, benchmarks for LLM coding capabilities) into a variation of the benchmark unlikely to have been seen in the training data. It then makes the following claims about the method: 1.
First it theoretically places bounds on the likelihood of collisions of the scenario and context of the generated task. 2. Then it measures the effect of various amounts of contamination on the benchmark results, finding the dynamic benchmarking is resistant to contamination. This is the main result of the paper. 3. When benchmarking in-the-wild models on the new benchmark, the paper reports model performance and finds evidence that Qwen2.5-Coder-7B may be contaminated. We discuss claim 1 in the theoretical claims section below. The evidence for claim 2, that the dynamic benchmarking is resistant to contamination, is persuasive: the evidence in Figure 4 is clear. A limitation of this evidence is that the models were merely fine-tuned, not pre-trained, on the contaminated data. The claim (3) that overfitted models appear as outliers (Figure 5) is reasonable, but there is no hard evidence that Qwen2.5-Coder-7B is contaminated; the language used for this claim is appropriately couched. The introduction of the DivPass metric and the measurement of DyCodeEval's stability is a welcome contribution as well. The stability is strong enough (Figure 6) to make this approach trustworthy as a benchmark even when different random tasks are generated at each application. Finally there is one additional claim that using weaker LLMs for task generation leads to modest degradation in the quality of the eval (the consistency rate from the validation stage of generation drops). This is persuasive to me that Haiku is indeed insufficient as a choice of model for task generation, while Sonnet is sufficient (at least for the seed benchmarks selected in the paper). Methods And Evaluation Criteria: The main method of benchmark generation (scenario proposer, context generation, prompt rewriting, and validation) is a sensible way to generate new tasks unseen by the model during training time, even if the original (or a different derived) version of the benchmark leaked into the training set.
One limitation of this method concept is that the core solutions (i.e. the algorithmic insights that the solutions lean on) will still have leaked into the training set, but they will be heavily disguised. With today's models, that scenario/context "disguise" may be more significant than with future models. A limitation of the method is its reliance on a human verification step, which limits the ability to scale up the method. To evaluate the method the authors measure the stability of the resulting benchmark using HumanEval and MBPP as the seed benchmarks, they measure the benchmark's robustness to (a fine-tuning based approximation of) data contamination, and they measure in-the-wild performance and speculate about data leakage in real models. They also measure stability to regeneration of the benchmark. These are each quite sensible measures of evaluating the benchmark generation method. The seed benchmarks are limited in scope. The robustness to data contamination experiment only uses fine-tuning, not pre-training, which limits the conclusions we can draw. The true data leakage information about Qwen isn't known, so our ability to measure the benchmark's true leakage-detection ability is limited. But overall these experiments are compelling, demonstrating that (at least for these simple seed benchmarks and today's models) the approach is sound for generating data contamination resistant variations of the benchmark that are robust to regeneration. Theoretical Claims: The theoretical claims are correct given the assumptions stated and the proofs are sound. However, the theorems don't actually tell you what you might think/hope they do on a first read. The theorems correctly bound the probability of a collision between two generated examples provided that the scenario generator generates |S| distinct scenarios, and for each scenario the context phase generates |C| distinct contexts.
In generating the |S| scenarios, however, there is the possibility of a collision or near collision (i.e. two very similar scenarios). Similarly when generating contexts conditioned on a scenario, there is the possibility of a collision or near collision. I.e. S or C could contain near-collisions, and the likelihood of this seems more significant than the values given by the bounds. If indeed we are only concerned with exact collisions, the randomness in the rewriting phase reduces that likelihood considerably. Are there empirical values that would make sense to show for these bounds? E.g. you could demonstrate what dataset sizes admit what amounts of generated examples without collision. Note the notation used in theorems 2 and 3 is missing the bars left of S. Experimental Designs Or Analyses: See notes in Claims and Methods sections. Supplementary Material: Yes, I have reviewed the full supplementary material. Relation To Broader Scientific Literature: There are many code generation benchmarks for LLMs, of which HumanEval and MBPP are two examples. However, the literature routinely recognizes data contamination as a challenge for properly evaluating LLMs. Generalization or Memorization: Data Contamination and Trustworthy Evaluation for Large Language Models, Yihong Dong, Xue Jiang, Huanyu Liu, Zhi Jin, Bin Gu, Mengfei Yang, Ge Li, https://arxiv.org/abs/2402.15938. This paper proposes a method to mitigate the challenges of data contamination by using LLMs to modify existing benchmarks. It stands in contrast to other data contamination mitigation approaches like non-LLM rewrites of existing benchmarks (see next section of review) and manually curated time-cutoff benchmarks like LiveCodeBench. Essential References Not Discussed: Please also see "Is Your Benchmark (Still) Useful? Dynamic Benchmarking for Code Language Models", https://arxiv.org/abs/2503.06643, a concurrent work released just in the last few days.
(No expectation that it would already have been in your paper, of course!) Other Strengths And Weaknesses: See other sections. Other Comments Or Suggestions: The pdf repeatedly ran into rendering issues on my machine; my first guess would be that the images in Figure 5 are too large, but I have not investigated further. It might just be an issue on my end. I report this just for your information. Line 12 (the first line) typo: missing space. Line 19: remove "to be". Why is Section 5 titled "Application"? That seems like an oversight. Questions For Authors: Is Sonnet sufficient for applying this task generation procedure to more complex tasks than HumanEval and MBPP? Is there evidence to validate the speculation that Qwen2.5-Coder-7B contains data contamination (or that the other models do not) beyond that from this benchmark? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for reviewing our paper and valuable comments. --- > **Empirical values of our theoretical bounds** To empirically evaluate the collision rate of our method, we conduct an experiment on HumanEval. First, we run DyCodeEval on HumanEval to generate an initial set of transformed programming problems. We then repeat this process \( N \) times (\( N = 10, 20, 30, 40, 50 \)) and measure: 1. **Repeat rate** – the proportion of problems from the initial transformed set that reappear in the subsequent \( N \) runs. 2. **Collision rate** – the proportion of problems within the \( N \) runs that are duplicates of any previously generated problem, regardless of whether they match the initial set. To highlight the advantages of our hierarchical transformation, we compare it against a baseline where we prompt an LLM (using the following prompt) to generate a new programming problem from a given seed. We report both the repeat rate and collision rate for this baseline as well. Baseline Prompt ``` Rewrite the following problem description to create a new problem description for new scenario\n\n Original Problem Description\n {ori_inst}\n\n Please ensure to put your rewritten problem description in <new_problem></new_problem> tags.
```

| \# Of Run | Ours | Ours | Baseline | Baseline |
| --------- | -------------- | ------------- | -------------- | ------------- |
| | \# of repeated | repeated rate | \# of repeated | repeated rate |
| 10 | 0 | 0 | 3 | 0.018292683 |
| 20 | 0 | 0 | 4 | 0.024390244 |
| 30 | 0 | 0 | 8 | 0.048780488 |
| 40 | 0 | 0 | 9 | 0.054878049 |
| 50 | 0 | 0 | 9 | 0.054878049 |

| \# Of Run | Ours | Ours | Baseline | Baseline |
| --------- | --------------- | -------------- | --------------- | -------------- |
| | \# of collision | collision rate | \# of collision | collision rate |
| 10 | 0 | 0 | 7 | 0.042682927 |
| 20 | 0 | 0 | 17 | 0.103658537 |
| 30 | 0 | 0 | 30 | 0.182926829 |
| 40 | 0 | 0 | 36 | 0.219512195 |
| 50 | 0 | 0 | 39 | 0.237804878 |

--- > **Is Sonnet sufficient for more complex tasks** To assess whether Sonnet is sufficient for more complex tasks, we apply our approach to LiveCodeBench, a **competition-level** programming benchmark. The following table shows the number of tokens in these three datasets. For LiveCodeBench, we randomly selected 100 seed and transformed programming problem pairs and evaluated their semantic equivalence. Our analysis found that 92 out of 100 pairs were semantically equivalent, demonstrating the effectiveness of our transformation approach.

| Dataset | Min | Avg. | Max |
| ------------- | --- | ----- | --- |
| LiveCodeBench | 54 | 242.7 | 693 |
| HumanEval | 4 | 55.6 | 430 |
| MBPP | 7 | 16 | 47 |

--- > **Other evidence to validate that Qwen2.5-Coder-7B contains data contamination** Another indication that Qwen2.5-Coder-7B may be overfitted comes from LiveCodeBench, where its evaluation also shows an unusually large accuracy drop on newly collected programming benchmarks, similar to our findings in Figure 6. --- > **Essential References Not Discussed** We will add all mentioned papers to the related work section.
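For concreteness, the repeat and collision rates defined above could be computed with a short sketch like the following (problem identifiers are hypothetical, and exact string matching is assumed as the duplicate criterion, in line with the exact-collision case discussed in the review):

```python
def repeat_rate(initial, runs):
    """Fraction of problems from the initial transformed set that
    reappear in any of the subsequent runs."""
    later = set().union(*runs)
    repeated = [p for p in initial if p in later]
    return len(repeated) / len(initial)

def collision_rate(runs):
    """Fraction of generated problems that duplicate any previously
    generated problem, across all runs."""
    seen, total, collisions = set(), 0, 0
    for run in runs:
        for p in run:
            total += 1
            if p in seen:
                collisions += 1
            seen.add(p)
    return collisions / total

# Toy example: "p1" repeats from the initial set; "p4" collides across runs.
initial = {"p1", "p2", "p3"}
runs = [{"p1", "p4"}, {"p5", "p4"}]
print(round(repeat_rate(initial, runs), 3))  # 0.333
print(collision_rate(runs))                  # 0.25
```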
Summary: This paper introduces a novel code LLM benchmark that leverages metamorphic testing to address challenges associated with current benchmarks' reliance on publicly available, human-curated datasets. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: None Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: No Essential References Not Discussed: No Other Strengths And Weaknesses: 1. I have concerns about the benchmark's discriminative power. For instance, in the right subplot of Figure 5 (MBPP), only a few models appear as outliers. Additionally, open-source models seem to perform as well as closed-source models on the new benchmark, which raises doubts about whether the benchmark's difficulty is sufficient to track future model advancements. 2. The paper lacks references to highly relevant works, such as Li et al. "EvoCodeBench: An Evolving Code Generation Benchmark with Domain-Specific Evaluations" (NeurIPS 2024) . The existence of such closely related work significantly undermines the claimed novelty. Other Comments Or Suggestions: Many presentation details require improvement. For example, the images in Figure 6 appear truncated, which impacts readability. Questions For Authors: Please see `Other Strengths And Weaknesses`. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thanks for reviewing our paper and valuable comments. > ​**Concern about the benchmark's discriminative power** We appreciate the reviewer’s feedback and the opportunity to clarify our findings. However, we believe there may be some misunderstandings regarding Figure 5. First, the lower number of outliers in Figure 5 does not imply that our benchmark lacks discriminative power. The key evidence for distinguishing overfitted models is in Figure 4, not Figure 5. In Figure 4, we fine-tune models with controlled contamination and observe a significant accuracy drop on our benchmark as leakage increases (second row). This confirms that models overfitted to contaminated data perform well on the original benchmark but fail on ours. In contrast, Figure 5 evaluates potential data contamination in publicly available LLMs, not discriminative power. Without access to these models’ training data, we cannot confirm overfitting but instead analyze their performance differences. Our findings show: 1) A linear relationship between accuracy on our benchmark and the original, indicating comparable problem complexity. 2) Anomalous behavior in certain models, such as Qwen2.5-Coder-7B, which experiences an unusually large accuracy drop, falling outside the 95% confidence region. While we cannot confirm contamination, this suggests potential data leakage, which is why we use the term “may be contaminated.” Second, the fact that open-source models perform similarly to closed-source ones does not mean our benchmark is too easy to track future advancements. Our focus is on *transparent* evaluation rather than increased difficulty. A reliable benchmark should measure true generalization while avoiding misleading performance inflation caused by data contamination. > **Relationship with EvoCodeBench** EvoCodeBench is constructed by collecting programming problems from GitHub, following a similar approach to LiveCodeBench, as discussed in our *Introduction* section. 
However, this method has several limitations: (1) It shifts the burden of manual question design to coding platform authors. (2) Since problems come from public GitHub repositories, existing models may have already seen them, raising concerns about data contamination. (3) EvoCodeBench relies on external contributions, leading to infrequent updates—the latest update, per their [website](https://huggingface.co/datasets/LJ0815/EvoCodeBench/tree/main), was nine months ago, which is inadequate given the rapid pace of model development. While DyCodeEval is fundamentally different, we acknowledge EvoCodeBench as related work and will include it in the related work section. However, its existence does not diminish our contributions, as DyCodeEval is fully automated and scalable. > **Figure 6** We have revised Figure 6, on our [website.](https://github.com/anonymousGithub2022/DyCodeEval/tree/main?tab=readme-ov-file#stability-of-dycodeeval)
NextCoder: Robust Adaptation of Code LMs to Diverse Code Edits
Accept (poster)
Summary: This paper addresses two issues: (1) enhancing code language models on code-editing tasks; and (2) mitigating catastrophic forgetting caused by task-specific fine-tuning. To address (1), it proposes a method for synthesizing high-quality code-editing data; to address (2), it introduces Selective Knowledge Transfer (SeleKT), a low-rank optimization technique that limits parameter updates. Experiments on four code-editing benchmarks demonstrate that the approach achieves state-of-the-art performance with models of comparable scale while preserving robust code generation capabilities. Claims And Evidence: The claims "Existing code LMs are deficient in handling diverse code-editing tasks" and "suppressing catastrophic forgetting in general models after fine-tuning" raise concerns about the real-world applicability of research in the field of code language models. While the authors criticize the quality of existing code-editing datasets, my concern lies in why the LLM-generated dataset is realistic and representative. Specifically, the real-world code edits for a change requirement (a.k.a. a commit) can be distributed across many files in a code repository; how could the synthesized dataset represent such a distribution? Methods And Evaluation Criteria: Code editing can involve both edit location and edit generation, as in [2]. The authors seem to miss edit generators such as [1] Saikat Chakraborty, Yangruibo Ding, Miltiadis Allamanis, and Baishakhi Ray. 2022. CODIT: Code Editing With Tree-Based Neural Models. IEEE Transactions on Software Engineering 48, 4 (2022), 1385–1399. https://doi.org/10.1109/TSE.2020.3020502 [2] Chenyan Liu, Yufan Cai, Yun Lin, Yuhuan Huang, Yunrui Pei, Bo Jiang, Ping Yang, Jin Song Dong, and Hong Mei.
CoEdPilot: Recommending Code Edits with Learned Prior Edit Relevance, Project-wise Awareness, and Interactive Nature (ISSTA 2024) Theoretical Claims: NA Experimental Designs Or Analyses: There is a lack of evaluation of the generated benchmark and of why it is useful for training a practical code-editing model. I suggest the trained model should be evaluated on a real-world commit dataset such as the one collected in [2]. [2] Chenyan Liu, Yufan Cai, Yun Lin, Yuhuan Huang, Yunrui Pei, Bo Jiang, Ping Yang, Jin Song Dong, and Hong Mei. CoEdPilot: Recommending Code Edits with Learned Prior Edit Relevance, Project-wise Awareness, and Interactive Nature (ISSTA 2024) Supplementary Material: NA Relation To Broader Scientific Literature: The topic of the paper is practical and useful. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: 1. How do you evaluate the usefulness of the generated training datasets? 2. Can your approach address repository-level code editing, which is more practical in the real world? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful feedback on our work. We answer the questions asked by the reviewer below: > My concern lies in why the LLM-generated dataset is realistic and representative? Specifically, ..., how could the synthesized dataset to represent such distribution. Our synthetic data generation, by design, supports multiple correlated edits within a single task. Indeed, our pipeline generates multi-edit examples and we will add concrete examples in the appendix in the revised version. This approach mirrors real-world software development, where changes often span multiple classes and interrelated code segments. By generating data that captures the complexity of correlated changes, we ensure an authentic representation of complex code editing scenarios. To further validate our dataset's ability to handle multi-file edits, we finetuned larger QwenCoder-2.5 variants and evaluated them on [Aider Polyglot](https://aider.chat/2024/12/21/polyglot.html#the-polyglot-benchmark), a challenging benchmark with multi-file edit problems. Due to char limits, we are unable to give a concrete example in this response. However, we would be happy to share it if the reviewer wants, in another response. Our NextCoder models show significant performance gains, as summarized in the tables below. | **14B Models** | **Polyglot** | |:----------|:------------:| | QwenCoder-2.5-14B | 9.3 | | QwenCoder-2.5-14B-LoRA | 5.3 | | QwenCoder-2.5-14B-SFT | 3.1 | | **NextCoder-14B** | **12.2** | | **32B Models** | **Polyglot** | |:----------|:------------:| | QwenCoder-2.5-32B | 16.4 | | QwenCoder-2.5-32B-LoRA | 6.7 | | QwenCoder-2.5-32B-SFT | 8.4 | | **NextCoder-32B** | **21.9** | > I suggest the trained model shall be evaluated on some real-world commit datasets like [CoEdPilot](https://arxiv.org/abs/2408.01733). We agree that evaluation on real-world commits is essential -- and we have already done this. 
A subset of the NoFunEval benchmark [CoLM 2024], which includes five splits assessing a model’s ability to improve code on multiple non-functional requirements (e.g., runtime, safety, etc.), is derived from real-world commits in Android repositories. In particular, the latency and resource-utilization splits are derived from real commits. The results of these evaluations are discussed in Section 5.2 of our paper. Regarding CoEdPilot, while it represents an interesting benchmark derived from real-world commits, its focus on edit propagation (predicting edits across multiple locations based on patterns of previous edits) differs substantially from our work's objectives. Our approach aims to perform targeted code edits based on natural language instructions, rather than propagating edits across a codebase. Additionally, CoEdPilot's emphasis on fine-grained edit detection and classification at the line level (keep/insert/replace) wouldn't align well with our instruction-following paradigm. The fundamental difference is that our model requires explicit natural language instructions, whereas CoEdPilot infers edit patterns automatically. If required, we will manually write instructions for some of the instances from CoEdPilot and include results in the revised version. > Code editing can involve both edit location and edit generation, as in [CoEdPilot](https://arxiv.org/abs/2408.01733). The authors seems to miss the edit generators such as [CODIT](https://arxiv.org/abs/1810.00314). We thank the reviewer for pointing out CoEdPilot and CODIT; however, these methods target distinct scenarios. CoEdPilot focuses on edit propagation, predicting future edits based on past edits, and employs a fine-grained edit detection mechanism (Edit-propagating Line Locator) to classify edit types (keep/insert/replace) at the line level. 
In contrast, our approach finetunes a code LM to follow natural-language instructions for editing a given codebase, without requiring past edits or explicit edit localization. CODIT, on the other hand, predicts repetitive edits using a tree-based neural machine translation model, focusing on structured edit patterns rather than general-purpose instruction-following for code modifications. Additionally, CODIT predates modern LLM-based approaches, making its methodology different from ours. These distinctions clarify that our work does not overlook edit generators but instead addresses a separate formulation of the code-editing problem. We will incorporate these clarifications, comparing against CoEdPilot and CODIT, into the paper. > How do you evaluate the usefulness of the generated synthetic data? In Section 5.4, we provide empirical evidence demonstrating the utility of our synthetic data compared to traditional commit-based datasets. Specifically, we observe that fine-tuning DeepSeek-6.7B on our synthetic data yields a performance advantage over CommitPackFT (a filtered dataset of high-quality commit data from GitHub). --- Rebuttal Comment 1.1: Comment: I thank the authors' clarification, which largely addresses my concern. In this case, I would like to vote for this submission with weak acceptance. --- Reply to Comment 1.1.1: Comment: Thank you for giving due consideration to our response and increasing your score!
Summary: The authors present a comprehensive approach to enhance the code editing capabilities of language models while maintaining their pre-existing abilities. Their work addresses two fundamental challenges in this domain: the scarcity of high-quality fine-tuning data for code editing tasks and the phenomenon of catastrophic forgetting during domain adaptation. The primary contributions of this research are threefold. First, the authors introduce a synthetic data generation pipeline to diverse code editing examples across eight programming languages. This pipeline systematically generates original code, modified code, and corresponding natural language instructions with varying levels of verbosity and styles, encompassing multiple edit types and code granularities. Second, the authors propose the Selective Knowledge Transfer (SeleKT) algorithm, a robust adaptation method that employs dense gradient-based steps to identify critical weights for code editing tasks, followed by sparse projections onto the base model to mitigate overfitting. Unlike conventional approaches that predetermine updatable weights, SeleKT dynamically reassesses weight importance throughout the fine-tuning process based on magnitude changes. Third, through empirical evaluation on four established benchmarks (CanItEdit, HumanEvalFix, NoFunEval, and Aider), the authors demonstrate that their adaptation of Qwen2.5-Coder-7B, named NextCoder, surpasses comparable models and even competes with substantially larger models on several tasks. Additionally, their experiments confirm that the SeleKT approach generalizes across model families, as evidenced by improvements in DeepSeekCoder-6.7B performance. ## update after rebuttal I thank the authors for their detailed response and my fellow reviewers for their insightful comments. Based on the additional ablation results, I have revised my score to 4. 
Claims And Evidence: Most of the claims made in the paper are generally supported by convincing empirical evidence, including: - NextCoder (using SeleKT) outperforms comparable models, and even larger ones, on diverse code-editing benchmarks. - The proposed automatic synthetic data pipeline yields gains over the CommitPackFT data baseline. - The claim that SeleKT preserves pre-learned abilities is validated by Table 5, which shows that models fine-tuned with SeleKT retain their code generation capabilities better than those fine-tuned with alternative methods. However, there are several claims which lack robust evidence. In particular: - Synthetic data quality and generalization: The claim that the synthetic data pipeline captures real-world diversity is less substantiated, as the analysis relies heavily on select benchmark performance. We know that one can improve task performance by obtaining synthetic samples similar to the end-task distributions. Since the authors have not done any analysis of the overlap between generated data and benchmark data, it is not even clear whether they are inadvertently overfitting to the test set. Further, the proposed pipeline relies solely on LLM-based automatic checks, and the efficacy of such checks is not evaluated by the authors. - Lack of hyperparameter tuning for baselines: While the authors present extensive analysis for their own proposed algorithm SeleKT, I find the SFT and LoRA baseline numbers less convincing, as the authors show that SFT leads to huge degradation (65.6 -> 59.5 on MBPP+). It is not clear for how many steps the model was trained and what hyperparameters the authors tried to reduce potential overfitting. - Lack of data scaling curve: On synthetic data, the authors do not show how scaling up data size improves performance and when we see some sort of performance saturation. - SeleKT generalization: I strongly suggest the authors measure generalization beyond HumanEval and MBPP. For example, they can also use MMLU or some non-code-related dataset to truly measure degradation in model quality.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper are well-tailored to the problem of adapting code language models for diverse code edits. - Synthetic Data Generation Pipeline: The approach builds on previous successes in using synthetic data for instruction tuning (e.g., self-instruct methods in natural language processing) and extends those ideas to the code domain, addressing the limitations of using only mined commit data. The observed gains on multiple benchmarks (see Tables 1 and 6) validate the effectiveness of the proposed synthetic data approach. - Diverse Benchmark Datasets: The use of benchmarks such as CanItEdit, HumanEvalFix, Aider, and NoFunEval is appropriate because they capture a variety of editing scenarios—from function-level bug fixes to full file improvements and even non-functional aspects like performance or security. This diversity is crucial for assessing how well the adapted model handles the breadth of real-world code-editing tasks. Additionally, evaluation on generation benchmarks like HumanEval+ and MBPP+ ensures that any gains in code editing do not come at the expense of the model's fundamental code generation and comprehension abilities. Theoretical Claims: The proof for Lemma 1 is correct in its intended scope—it shows that the selective update mechanism guarantees that at most $c$ parameters are altered relative to the base model. However, it is limited to a counting argument and does not offer any theoretical insights into the optimization dynamics or convergence behavior in non-convex settings. Experimental Designs Or Analyses: - Benchmark Selection and Evaluation: The authors used standard benchmarks for code edits (HumanEvalFix, Aider) as well as generic code generation performance (HumanEval, MBPP), which is a very reasonable choice. To measure whether the model retains performance on generic tasks, I would also have preferred to include benchmarks beyond just code generation.
- Quality and Representativeness of Synthetic Data: The soundness of the synthetic data generation is critical. While the pipeline is innovative, the quality of the synthetic examples depends on the underlying LLMs used for generation, which may introduce systematic biases or fail to capture rare code-editing scenarios. - Ablation Studies on Hyperparameters: For the SeleKT algorithm, the authors provide detailed ablations on key hyperparameters, such as the sparsity factor $\alpha$. However, for baseline methods, the authors don't provide details on hyperparameter selection. - Ablations on Synthetic Data: While the proposed data generation pipeline is reasonable, the paper lacks ablations related to the quality of generated data beyond end-task performance, and to how scaling up data improves performance. Supplementary Material: I have reviewed the prompts used for the data pipeline, which look reasonable to me. Relation To Broader Scientific Literature: - The idea of generating synthetic instruction-response pairs for fine-tuning has become a standard method in natural language processing, as seen in self-instruct pipelines (e.g., Wei et al., 2024a; Wang et al., 2023). The paper extends these ideas to code editing, building on methods like CodeAlpaca and Self-Instruct but specifically targeting the diversity of code modifications. Overall, it's a well-tested recipe to improve performance. - The challenge of catastrophic forgetting in fine-tuning has been extensively studied (Goodfellow et al., 2013; Kirkpatrick et al., 2017). SeleKT is motivated by these works, aiming to strike a balance between acquiring new task-specific knowledge and retaining general capabilities. Further, techniques such as LoRA (Hu et al., 2021) and recent sparse adaptation methods (Nguyen et al., 2024b) update only a subset of parameters to avoid overfitting.
Unlike these methods, which select parameters a priori, SeleKT dynamically reassesses which parameters to update by computing the full gradient periodically, then selecting the top-k updates. This approach is more adaptive and echoes ideas from model merging techniques like TIES (Yadav et al., 2024) but integrates the selection process during training rather than post-hoc. Essential References Not Discussed: While one can always cite additional papers, I think authors have cited relevant literature to connect paper to existing ideas. Other Strengths And Weaknesses: - Lack of baseline optimization: It is unclear if hyperparameter settings have been optimized for baseline used in paper including SFT, PEFT. The effectiveness of these comparisons hinges on whether each baseline was optimally tuned. - Lack of data ablations on synthetic data quality: Authors should consider conducting scaling experiments to show how quantity of data improves performance and should conduct human annotations of a subsample to explain the quality and limitations of generated data. Other Comments Or Suggestions: I don't have other comments Questions For Authors: - Could you please provide SFT and LoRA fine-tuning details including hyperparams tried to reduce overfitting? - To measure model's ability to retain generic performance, can you also evaluate it on other tasks such as MMLU? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful feedback on our work. We answer the questions asked by the reviewer below: > Authors have not done any analysis on generating data overlap with benchmark ... The benchmarks considered in the paper are based on manually-created coding problems and solutions. In contrast, we used the training split of the StarCoder dataset, which is derived from GitHub commits, as our seed data. The synthetic data pipeline generates samples inspired by the seed data. This reduces the likelihood of unintended overlap with the benchmark data and prevents inadvertent overfitting. We agree with the reviewer on the importance of this point and quantitatively analyze overlap using the standard decontamination approach used in StarCoder: [bigcode-dataset/decontamination](https://github.com/bigcode-project/bigcode-dataset/tree/main/decontamination). The decontamination report confirms 0% overlap between our training data and benchmarks. We will include a discussion on this and report the decontamination measurement in the revised version. > Lack of hyperparam tuning for baselines ... We have documented the specifics of our SFT and LoRA hyperparameters in Appendix A.1.2. For LoRA, we utilized the hyperparameters from the Qwen model's official implementation [scripts](https://github.com/QwenLM/Qwen/blob/main/finetune.py#L55). To address the reviewer's concern, we have conducted an initial experiment on hyperparameter tuning for the baseline methods. Given that SeleKT was trained with a learning rate of 1e-5 and weight decay of 0.0, we explored closer configurations for LoRA and SFT, testing learning rates of 2e-6 and 5e-6 with weight decay values of 0.10 and 0.05. Additionally, for LoRA, we experimented with ranks from 16 to 64 and alpha values of 8 and 16. While this tuning led to some improvements in both SFT and LoRA, a significant performance gap remains between these baselines and NextCoder-7B.
We will include these updated results in the final version. > Not clear for how many steps the model was trained. We trained all our models for 3 epochs (Section 5.1). > Theoretical insights into the optimization dynamics or convergence behavior in non-convex settings Please refer to a detailed response to reviewer 3QHQ (3rd point). > Lack of data scaling curve Please refer to a detailed response to reviewer 3QHQ (2nd point). > To measure model's ability to retain generic performance, can you also evaluate it on other tasks such as MMLU? In addition to (a) the additional experiment on MMLU as suggested by the reviewer, we also conducted (b) experiments on GSM8K to further demonstrate the ability of our method to retain generic performance. For evaluation, we followed the few-shot setting (N=4) and the same prompt used in the Qwen models' official evaluation script. Given that our model is designed for code-related tasks, we focused on the STEM subset of MMLU, which contains 3.15K problems covering topics such as: Physics, Chemistry, Biology, Computer Science, Mathematics and Engineering. This subset aligns closely with the problem-solving and computational reasoning abilities expected from a code-editing model, making it a more meaningful evaluation of whether fine-tuning on code has impacted general problem-solving performance. For GSM8K, we considered the full benchmark. | Model | MMLU | GSM8K | | -------------------------- | -------- | -------- | | Qwen2.5-Coder-7B-Instruct | 53.0 | 83.40 | | Qwen2.5-Coder-32B-Instruct | 71.9 | 93.71 | | NextCoder-7B | 54.5 | 81.65 | | NextCoder-32B | 72.7 | 92.65 | The above table presents the accuracy scores for our NextCoder models alongside the Qwen2.5-Coder models. These results substantiate the robustness of our approach, in particular the absence of catastrophic forgetting is evident. We will add these results to the paper. 
> Underlying LLM for data generation might introduce systematic biases or fail to capture rare-code editing scenarios. The reviewer raises important issues which can affect effectiveness of synthetic data generation. To mitigate these issues, we implemented a multi-faceted approach to ensure diversity and reduce systematic biases. Our synthetic data generation strategy relies on a diverse set of seed data (which, for example, is significantly larger than [WizardCoder’s](https://arxiv.org/pdf/2306.08568) 20K instances) as the basis (Section 3, point i), ensuring a broad initial representation of code editing contexts. We further enhance the coverage/diversity of our generated scenarios by incorporating three randomly selected improvement areas (Section 3, point i) for each synthetic data instance. This approach helps prevent the model from converging on a narrow set of editing patterns. By deliberately introducing randomness through multiple improvement areas and using a diverse initial dataset, we have tried to mitigate bias and coverage concerns.
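The randomized improvement-area sampling described above can be sketched as follows (the area names and the function are illustrative assumptions for this rebuttal, not the paper's actual category list or pipeline code):

```python
import random

# Illustrative improvement areas -- NOT the paper's actual category list.
IMPROVEMENT_AREAS = [
    "bug fix", "refactoring", "performance", "readability",
    "error handling", "security", "documentation", "API migration",
]

def sample_edit_scenario(seed_code, rng=None):
    """Pair a seed snippet with three distinct, randomly chosen
    improvement areas, as in the diversity mechanism described above."""
    rng = rng or random.Random(0)  # fixed seed here for reproducibility
    areas = rng.sample(IMPROVEMENT_AREAS, 3)
    return {"seed": seed_code, "areas": areas}

scenario = sample_edit_scenario("def add(a, b): return a + b")
print(len(scenario["areas"]))  # 3
```

Drawing three distinct areas per instance (rather than one) is what prevents the generator from converging on a narrow set of editing patterns.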
Summary: This paper proposes an approach to handling diverse code-editing requirements. First, it introduces a synthetic data generation pipeline that begins with seed code samples and applies various editing criteria to produce high-quality training data. This pipeline generates pairs of original and modified code along with natural language instructions in different styles and verbosity levels. Second, the paper presents SeleKT, a model adaptation algorithm that identifies the most crucial weights for code editing using a dense gradient-based step, followed by a sparse projection onto the base model to prevent overfitting. Experimental results show that the resulting model, NextCoder, achieves strong performance across multiple code-editing benchmarks, surpassing comparably sized models and even outperforming some larger models in code-editing tasks. ## update after rebuttal As the authors present well-designed research questions for studying human-machine alignment, supported by evidence-based performance improvements, the reviewer agrees that the proposed SeleKT methodology is both effective and practical in code-editing scenarios. Therefore, the reviewer raises the score. Claims And Evidence: - While the challenges discussed in the paper appear relevant to the code editing task, the reviewer is concerned that these issues are common across all machine learning tasks. The scarcity of high-quality fine-tuning data and the risk of catastrophic forgetting during fine-tuning are well-known problems in ML models in general. This raises concerns that the paper’s contribution may not be sufficiently distinct. Methods And Evaluation Criteria: - The reviewer has a particular interest in the quality of the generated dataset. In Line 210 (L), the authors state that they perform quality-based filtering by prompting the LLM to select high-quality examples. 
However, the reviewer believes that this step requires some level of human intervention, including: (1) Defining the criteria for assigning a specific score. (2) Providing demonstration examples to guide the LLM in evaluating dataset quality. (3) (If possible) Conducting an empirical study to assess whether the LLM's evaluation standards align with those of human experts. For an example of such an empirical study, please refer to Section 4.4 in [1]. [1] AutoDSL: Automated domain-specific language design for structural representation of procedures with constraints, ACL'24 Theoretical Claims: - The reviewer has examined the proposed SeleKT algorithm for parameter optimization and considers it a general methodology for artificial neural network adaptation. However, the reviewer is uncertain whether its design choices are specifically tailored for the code-editing task. Experimental Designs Or Analyses: - The reviewer has examined the result analysis section and agrees that the proposed SeleKT method enhances code-editing efficiency and delivers better performance. Additionally, the reviewer considers SeleKT a general methodology applicable to training open-source ANN models for improved code editing. Supplementary Material: - The reviewer has examined the prompts presented in Appendix A.2. A minor concern is whether the prompt length may exceed the model's context window. The authors are encouraged to conduct an analysis of prompt length to clarify its potential impact. Relation To Broader Scientific Literature: - The paper proposes an improved methodology for code editing, which is a code intelligence task that involves modifying code based on natural language instructions. Essential References Not Discussed: - No comment Other Strengths And Weaknesses: - No comment. Other Comments Or Suggestions: - [Table 3] There is excessive blank space below the table caption. Please check the typeset configuration.
- [line 337] The model name “DeepSeek-R1-Qwen-7B” is missing the \textsf formatting. Questions For Authors: - No question. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful feedback on our work. We answer the questions asked by the reviewer below:

> The scarcity of high-quality fine-tuning data and the risk of catastrophic forgetting during fine-tuning are well-known problems in ML models in general. This raises concerns that the paper's contribution may not be sufficiently distinct.

While we agree with the reviewer that catastrophic forgetting and a lack of high-quality fine-tuning data are well-known problems, we are unable to see why this implies that our contribution is not sufficiently distinct. In fact, we do contrast and experimentally compare with some of the recent, key ML work in this space (PEFT/LoRA, model merging/TIES) throughout the paper. Our approach offers a conceptually novel solution: a weight update algorithm specifically designed to mitigate catastrophic forgetting during fine-tuning. Our strong results over model families, sizes, and benchmarks in the code-editing domain show improvements over standard techniques (SFT, LoRA, TIES). Further, our synthetic data generation pipeline is also a key contribution, placing emphasis on obtaining high-quality data tailored to the intricacies and diversity of real-world code-editing tasks and scenarios. Overall, our approach goes beyond existing methodologies and offers a nuanced, yet easy-to-implement approach to model adaptation.

> The reviewer has examined the proposed SeleKT algorithm for parameter optimization and considers it a general methodology for artificial neural network adaptation. However, the reviewer is uncertain whether its design choices are specifically tailored for the code editing task.

We agree with the reviewer about the potential generality of SeleKT. However, in this work, our motivation and focus are to improve code-editing performance without sacrificing pre-learned abilities like code generation.
The synthetic data generated for finetuning (Section 3) represents the design choices tailored specifically to the code-editing task. Code editing, and more generally coding models, is an important AI domain today, and our work clearly shows the method's potential. We note that in the present form, we have been careful not to make any claims about the generality of SeleKT beyond what we demonstrate in the paper. We plan to investigate the applicability of SeleKT to more domains like math and natural language reasoning in the future.

> The reviewer has a particular interest in the quality of the generated dataset. In Line 210 (L), the authors state that they perform quality-based filtering by prompting the LLM to select high-quality examples. However, the reviewer believes that this step requires some level of human intervention ... Conducting an empirical study to assess whether the LLM's evaluation standards align with those of human experts.

Doing manual labeling for individual samples is impractical given the size of the synthesized dataset (hundreds of thousands of samples). We follow a long line of work in the literature on using LLM-as-a-judge to scale quality checking. During the process of designing the synthetic data generation pipeline, we continuously monitored the quality of the generated data. Based on our observations, we implemented a stringent quality check process (Section 3, point iv; Appendix A.2, Figure 7) that filters out low-quality samples. Only instances meeting the specified criteria are retained. Recognizing the significant effort required for human expert labeling, our methodology provides a scalable alternative. Our experimental results clearly show improvements upon finetuning with the synthesized data. Nevertheless, following up on the reviewer's suggestion, we will conduct a study to evaluate agreement between the LLM and human reviewers on sample quality and include our findings in the paper.
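The retain-only-qualifying-instances step can be sketched as follows (a purely hypothetical illustration: `filter_samples`, `toy_judge`, and the 1-5 threshold are our assumptions, not the actual pipeline, prompts, or scoring scheme):

```python
def filter_samples(samples, judge, threshold=4):
    """Keep only samples whose quality score meets the threshold.
    `judge` stands in for a call to an LLM grader returning a 1-5 score."""
    return [s for s in samples if judge(s) >= threshold]

# toy judge for demonstration: rate a sample highly only if it carries
# a non-empty editing instruction
def toy_judge(sample):
    return 5 if sample.get("instruction") else 1

kept = filter_samples([{"instruction": "add error handling"}, {}], toy_judge)
```

In the real pipeline the judge would be a prompted LLM scoring against the criteria in Appendix A.2; the point of the sketch is only that filtering reduces to a score threshold over judged samples.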
> A minor concern is whether the prompt length may exceed the model's context window (during data generation).

Thank you for raising this concern. To clarify, we used GPT-4o and Llama-3.3-70B for data generation, both of which have sufficiently large context windows to accommodate our prompts. Therefore, exceeding the model's context length was not an issue during the data generation process.

---

Rebuttal Comment 1.1: Comment: Despite the solid theoretical analysis of the proposed NextCoder method, given the high-stakes nature of the software engineering domain, the reviewer believes that additional evidence is needed to demonstrate the practical applicability of this conceptually novel solution. In particular, the authors are encouraged to conduct human-machine alignment analyses. The reviewer would be willing to reconsider the score if a pilot human-alignment study or a post-generation case study is provided.

---

Reply to Comment 1.1.1: Comment: Thank you for your valuable feedback. Based on the reviewer's suggestion, we conducted a pilot human study to assess the quality of the generated training dataset. We are providing the detailed results below and would be happy to discuss them further.

**Study Design** In this study, we involved three participants who have 3-4 years of experience in software development with strong expertise in Python. We randomly selected 100 samples from the Python split of our synthetic dataset and asked participants to rate each sample on a scale of 1-5 (1 being poor quality and 5 being excellent quality) on the following three questions:
- **Q1 [Instruction Usefulness]**: How well does the detailed instruction capture a potential code-editing scenario with respect to the original code?
- **Q2 [Instruction Consistency]**: How consistent are the three styles of instructions (detailed, concise and human-like) with each other and with the respective styles?
- **Q3 [Solution Correctness]**: How well does the edited code match the edit described in the detailed instruction?

**Overall Assessment** In the table below, we present the mean (along with standard deviations) ratings by participant and by question.

| **Metric** | **Participant 1** | **Participant 2** | **Participant 3** | **Overall Mean** | **Overall SD** |
|--------------------------|-------------------------------|-------------------------------|-------------------------------|------------------|----------------|
| Instruction Usefulness | 4.92 ± 0.27 | 4.93 ± 0.26 | 4.38 ± 0.60 | 4.74 | 0.48 |
| Instruction Consistency | 4.29 ± 0.48 | 4.88 ± 0.32 | 4.55 ± 0.54 | 4.57 | 0.51 |
| Solution Correctness | 4.92 ± 0.27 | 4.96 ± 0.24 | 4.60 ± 0.51 | 4.83 | 0.40 |
| **Overall Mean ± SD** | **4.71 ± 0.46** | **4.92 ± 0.28** | **4.51 ± 0.56** | | |

The scores are consistently close to the highest score of 5 across all participants and questions. This provides a strong indication of human-machine alignment, with low to moderate variance. This study helps validate that our synthetic data generation pipeline is able to generate samples that meet human expectations in terms of quality and consistency. This complements the theoretical and empirical evidence we provide in the paper. We will incorporate this study into the revised version of the paper, along with selected qualitative examples, to further validate the design of our data generation pipeline and quality of the generated synthetic training data. We thank the reviewer for suggesting the AutoDSL [ACL'24] paper. The AutoDSL paper and "How to do human evaluation: A brief introduction to user studies in NLP" [NLE'23] cited therein were useful references towards conducting the human study.

**Score Distribution** All samples received scores 3 (neutral quality) or above on all the questions. We give the exact distribution below.
| **Score (Higher is better)** | **Instruction Usefulness** | **Instruction Consistency** | **Solution Correctness** |
|----------:|----------------------------:|-----------------------------:|---------------------------:|
| 5 | 229 | 175 | 250 |
| 4 | 65 | 122 | 48 |
| 3 | 6 | 3 | 2 |
| **Total** | 300 | 300 | 300 |

**Common Observations for Neutral Ratings (Score 3)** We particularly inspected the samples that received the neutral rating (score 3) since those were perceived as relatively low-quality samples by one or more participants. We made the following observations:
- **Instruction-Edit Misalignment**: In some cases, instructions correctly described the intent but the edits were not entirely appropriate. For example, in response to an instruction to handle datetime parsing, the edited code parsed dates against raw strings, which would cause runtime errors.
- **Incomplete Error Handling**: Some examples did introduce error handling, but overlooked edge cases (e.g., what if the `tasks.json` file exists but is empty?).
- **Style Inconsistency**: A few participants noted that stylistic or structural variations across instruction formats led to minor misunderstandings of the code-editing intent.

---
Summary: The paper introduces an adaptation method for code language models on code-edit tasks. The authors present a synthetic data generation pipeline that creates code samples paired with edited versions and natural language instructions. The paper states that during fine-tuning, their SeleKT can update the model's weights to avoid catastrophic forgetting and thus improve performance. The authors also provide experiments to show that NextCoder outperforms comparable models on several code-editing benchmarks. Claims And Evidence: Claim: Selective adaptation via SeleKT improves editing performance without harming general code generation. Evidence: Experiments on 4 benchmarks (CanItEdit, HumanEvalFix, NoFunEval, Aider), with comparisons against other methods, show that their model performs better consistently. Methods And Evaluation Criteria: The approach of using a synthetic data pipeline to generate examples seems appropriate, the method that updates the model weights seems to work based on the experiments, and the evaluation criteria also seem good. Theoretical Claims: The paper does not seem to have formal theoretical proofs. Experimental Designs Or Analyses: The designs were validated via 4 benchmark datasets and other baseline methods, which seems fine. However, it would be good if the authors could discuss the reason for selecting some specific hyperparameters for SeleKT. Supplementary Material: The supplementary materials include additional details regarding the pipeline, including prompts, the workflow, and sample generated instances. Relation To Broader Scientific Literature: Their model, NextCoder, is based on previous work from Qwen2.5-Coder-7B, where they introduce a new fine-tuning method to somewhat address the catastrophic forgetting issue, resulting in slightly better performance. Synthetic data generation and instruction tuning seem to be related prior topics. Essential References Not Discussed: No.
However, it would be beneficial if the authors could add some discussion of recent studies on selective parameter updating and fine-tuning methods. Other Strengths And Weaknesses: Strength: The comparison table is clear and comprehensive, and the 4 benchmarks are sufficient to demonstrate the model's gains; overall, the paper is presented quite clearly. Weakness: limited theoretical analysis of the mechanism for ML interpretability; possible sensitivity to the hyperparameters. Other Comments Or Suggestions: Maybe the authors could add some discussion of potential trade-offs in hyperparameter selection. Questions For Authors: What is the scalability and robustness of the approach on larger/smaller datasets? For instance, besides code and instructions, would there be a possible application of this to proteins and explanations? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful feedback on our work. We answer the specific questions below:

> Reason for selecting some specific hyperparameters for SeleKT.

Our preliminary experiments indicated that a sparsity value of 5% and a periodicity of 1 epoch were effective across model families, sizes, and benchmarks. Due to limited compute, we stick to these values in our experiments. Nevertheless, in Section 5.5, we present ablations on these two key hyperparameters of our method, namely, the sparsity $\alpha$ and periodicity $M$. By separately tuning the hyperparameters for each model, we can get further improved performance for our approach. As suggested by the reviewer, we will include a discussion of hyperparameter selection and tradeoffs in the revision.

> Robustness of the approach across dataset sizes

In addition to answering the reviewer's question on (a) robustness to dataset sizes, we are also including new results on (b) robustness across model sizes.

**(a) Robustness across dataset sizes**

| Dataset size | CanItEdit | HumanEvalFix | Aider |
| --------------- | --------- | ------------ | ----- |
| Base model (QwenCoder-2.5-7B) | 48.1 | 73.8 | 59.4 |
| 25% | 47.67 | 80.20 | 60.90 |
| 50% | 48.57 | 81.43 | 62.70 |
| 75% | 49.01 | 81.02 | 63.80 |
| 100% (NextCoder-7B) | 50.48 | 81.10 | 65.70 |

We appreciate the reviewer's suggestion to evaluate the effect of dataset size. To assess scalability w.r.t. training data size, we finetuned the QwenCoder-2.5-7B model on varying fractions (randomly sampling 25%, 50%, and 75%) of our dataset, which includes both synthetic and CommitPackFT data. The results are presented in the table above. All models were trained for 3 epochs. The results show a clear trend: while performance on CanItEdit and Aider sees a drop at 25% w.r.t. the base model, increasing the dataset size consistently improves performance across all benchmarks (CanItEdit, HumanEvalFix, and Aider).
**(b) Robustness across model sizes**

Additionally, we are happy to share new results (the tables below) comparing the performance of our SeleKT algorithm across various model sizes against supervised finetuning (SFT) and parameter-efficient low-rank adaptation (LoRA) on multiple code editing benchmarks.

| **3B Models** | **HumanEvalFix** | **CanItEdit** | **Aider** |
|:----------|:----------------:|:-------------:|:---------:|
| QwenCoder-2.5-3B | 73.2 | 37.1 | 36.8 |
| QwenCoder-2.5-3B-LoRA | 64.6 | 36.2 | 35.8 |
| QwenCoder-2.5-3B-SFT | **76.2** | 32.4 | 30.1 |
| **NextCoder-3B** | 75.6 | **42.4** | **37.6** |

| **14B Models** | **HumanEvalFix** | **CanItEdit** | **Aider** | **Polyglot** |
|:----------|:----------------:|:-------------:|:---------:|:------------:|
| QwenCoder-2.5-14B | 87.8 | 58.1 | 66.9 | 9.3 |
| QwenCoder-2.5-14B-LoRA | 78.0 | 50.9 | 66.2 | 5.3 |
| QwenCoder-2.5-14B-SFT | 79.9 | 42.4 | 36.8 | 3.1 |
| **NextCoder-14B** | **89.8** | **60.2** | **72.2** | **12.2** |

| **32B Models** | **HumanEvalFix** | **CanItEdit** | **Aider** | **Polyglot** |
|:----------|:----------------:|:-------------:|:---------:|:------------:|
| QwenCoder-2.5-32B | **90.2** | 61.0 | 72.9 | 16.4 |
| QwenCoder-2.5-32B-LoRA | 82.3 | 52.4 | 60.2 | 6.7 |
| QwenCoder-2.5-32B-SFT | 81.7 | 49.5 | 66.9 | 8.4 |
| **NextCoder-32B** | 88.9 | **62.4** | **74.7** | **21.9** |

For the smaller 3B model, NextCoder-3B shows significant improvements over the base model across most benchmarks, with a substantial gain on the CanItEdit benchmark (+5.3%). For larger models, we have also included the latest and more challenging [Aider-Polyglot benchmark](https://aider.chat/docs/leaderboards/) results (more details in response to reviewer itGF).

> Limited theoretical analysis of the mechanism

Though the focus of our paper is on rigorous empirical validation, we have made some progress on theoretical understanding, which we outline next.
Under the standard smoothness assumption on $f$, boundedness assumptions on the gradients, and the concentration assumption on the task vector stated below, we can show $O(1/\sqrt{T})$ convergence for SeleKT. Due to character limits, we are unable to give a proof sketch here.

**Task Vector Concentration Assumption** The task vector $\tau = \theta - \theta_{base}$ exhibits a concentration property with parameter $r \gg 1$, such that the majority of information is contained in the top-$\alpha$ fraction of parameters.

$$ \frac{\sum_{i \in \text{top-}\alpha} |\tau_i|^2}{\alpha N} \geq r \cdot \frac{\sum_{i \notin \text{top-}\alpha} |\tau_i|^2}{(1-\alpha)N}, \quad r \gg 1 $$

Note that $r$ defines a certain margin between the task-specific parameters (i.e., the top-$\alpha$ fraction of parameters) and the rest of the parameters (both suitably normalized).

> Besides from codes and instructions, would there be possible application...

Please refer to our response to reviewer UeAD (2nd point).
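For intuition, the sparse projection step can be sketched in a few lines of plain Python (our own toy illustration, not the authors' implementation; the parameter count and the concentrated task vector below are assumptions chosen to satisfy the concentration property):

```python
import random

def selekt_project(theta, theta_base, alpha=0.05):
    """Sketch of the sparse projection: keep the top-alpha fraction of the
    task vector tau = theta - theta_base by magnitude, and reset every
    other parameter to its base-model value."""
    tau = [t - b for t, b in zip(theta, theta_base)]
    k = max(1, int(alpha * len(tau)))
    # indices of the k largest-magnitude task-vector entries
    keep = set(sorted(range(len(tau)), key=lambda i: abs(tau[i]))[-k:])
    return [theta[i] if i in keep else theta_base[i] for i in range(len(theta))]

random.seed(0)
n = 1000
theta_base = [random.gauss(0, 1) for _ in range(n)]
# a concentrated update: only ~5% of parameters move substantially
tau = [random.gauss(0, 1) if i < 50 else random.gauss(0, 0.01) for i in range(n)]
theta = [b + t for b, t in zip(theta_base, tau)]

projected = selekt_project(theta, theta_base, alpha=0.05)
changed = sum(1 for p, b in zip(projected, theta_base) if p != b)
```

With alpha = 0.05 and 1000 parameters, exactly the 50 entries with the largest task-vector magnitude survive the projection; everything else reverts to the base weights, which is what limits drift from the base model.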
Causal Effect Identification in lvLiNGAM from Higher-Order Cumulants
Accept (poster)
Summary: This paper proposes causal effect identification methods, based on higher-order cumulants, for the proxy variable setup and the underspecified instrumental variable setup. In the proxy variable setup, both multiple latent confounders and a causal edge from the proxy variable to the treatment are allowed while only one proxy variable is required. In the underspecified instrumental variable setup, multiple treatments are allowed while only one instrumental variable is required. Claims And Evidence: The claims made in the submission are supported by evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem. Theoretical Claims: I have tried my best to read the theoretical proofs. However, I'm not familiar with the concepts defined in Definitions A.1~A.3. In fact, I have never encountered those concepts except in a recent work (Tramontano et al., 2024b). Besides, the proofs in this paper use theorems in (Schkoda et al., 2024), which are quite complicated. Therefore, I cannot provide a reliable assessment of the correctness of the theoretical proofs in this paper. Experimental Designs Or Analyses: I have no concern about the soundness/validity of the experimental designs or analyses. Supplementary Material: I didn't review the supplementary material. Relation To Broader Scientific Literature: The authors have discussed this in the Impact Statement. Essential References Not Discussed: There is no related work that is essential to understanding the (context for) key contributions of the paper but is not currently cited/discussed in the paper. But there are some recent works [1,2] that also use higher-order cumulants for identification of LiNGAM with latent variables. I recommend that the authors discuss them. [1] Identification of causal structure with latent variables based on higher order cumulants. AAAI 2024. [2] Recovery of Causal Graph Involving Latent Variables via Homologous Surrogates. ICLR 2025.
Other Strengths And Weaknesses: # Strengths 1. This paper is well-motivated: Tramontano et al. (2024b) provide profound identifiability results, but their identification method is based on OICA, which has not yet been equipped with consistency guarantees. To overcome this limitation, this paper provides an identification method based on higher-order cumulants. 2. This paper is solid; it provides both theoretical results (although the proofs are difficult for me to check) and experimental results. # Weakness 1. Tramontano et al. (2024b) have already provided necessary and sufficient graphical conditions for generic identifiability, so this paper is not novel in terms of identifiability results; its novelty lies only in the identification method. 2. (Minor) According to the experimental results shown in Figure 6, the proposed method is not superior to that proposed by Tramontano et al. (2024b) when the sample size is not very large. Other Comments Or Suggestions: Personally, I think this paper is hard to follow because the theoretical tools used in this paper are not common in the community of causal inference. I fully acknowledge that this is not a deficiency per se, but this may limit the impact of this paper. Questions For Authors: It seems that the proposed method assumes that the underlying causal graph is known. Please correct me if I'm wrong. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments and helpful suggestions.

- *There is no related work that is essential to understanding the (context for) key contributions of the paper but are not currently cited/discussed in the paper. However, there are some recent works [1,2] that also use high-order cumulants for identification of LiNGAM with latent variables. I recommend the authors discuss them.*

We will include a discussion of these works in the related work section. Both of these papers focus on the problem of *causal discovery*—that is, learning the causal structure from observational data—rather than on the problem of *causal effect identification*. In the latter, the causal graph is assumed to be known, and the goal is to identify the causal effect of a treatment on an outcome (see, for example, the formal definition of this problem in the seminal work of Shpitser \& Pearl (2006)). In particular, [1] proposed a procedure for using high-order moments to test for the presence of an edge between two observed variables where there is exactly one latent confounder. It is noteworthy that Schkoda et al. (2024) extended this result to settings with an arbitrary number of latent variables. In [2], the authors considered specific causal graphs under the assumption that for each latent variable $L$ in the graph, there exists a "homologous surrogate" (see the definition in [2]). They then derived a set of structural properties using equations involving cumulants to recover the underlying causal structure. It is important to note that none of the graphs considered in our work satisfy the assumption that every latent variable has a homologous surrogate, except the graph $\mathcal{G}_1$ with a single latent variable and no edge from the proxy variable to the treatment.

**Shpitser \& Pearl, (2006)** - Shpitser \& Pearl, Identification of Joint Interventional Distributions in Recursive Semi-Markovian Causal Models, AAAI, 2006.
- *It seems that the proposed method assumes that the underlying causal graph is known. Please correct me if I’m wrong.* As we mentioned above, in the problem of causal effect identification (the focus of the current work), the causal graph is known. - *Tramontano et al. (2024b) have already provided necessary and sufficient graphical conditions for generic identifiability, so this paper is not novel in terms of identifiability results; its novelty lies only in the identification method.* The identifiability results in Tramontano et al. (2024b) rely on the assumption that the full observed distribution is known. In contrast, our identifiability results require only knowledge of finitely many moments of the distributions, which is a strictly weaker assumption. Moreover, the estimation method proposed by Tramontano et al. (2024b) is based on solving an overcomplete Independent Component Analysis (OICA) problem, which is inherently non-separable. This implies that the true mixing matrix cannot be recovered solely by optimizing for independence among the exogenous noise—precisely the approach taken by their algorithm. Consequently, their method fails to consistently estimate the correct solution. This limitation is evident in our experiments, where we observe that GRICA’s accuracy does not improve monotonically with sample size. - *(Minor) According to the experimental results shown in Figure 6, the proposed method is not superior to that proposed by Tramontano et al. (2024b) when the sample size is not very large.* One possible explanation is that cumulant-based methods process unbiased estimates of high-order cumulants (order 4 or higher), also known as k-statistics. While these estimators are unbiased, they tend to have high variance for small sample sizes. In contrast, GRICA solves an optimization problem involving the $\ell_1$-norm of the observed samples, which have lower sample variance. 
As a result, for small sample sizes, GRICA may yield a lower mean squared error due to reduced variance. However, since the solution obtained by GRICA is not asymptotically unbiased, it cannot provide an asymptotically consistent estimator—unlike our proposed method. We will add a remark on this point in the final version of the manuscript.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. I have also read the authors' rebuttals to the other reviews. I still have some concerns, as follows.
1. In the problem of causal effect identification, the causal graph is **not** always assumed to be known. For instance, Tramontano et al. (2024b) have proven identifiability of causal effects in the case where the causal graph is unknown and the case where the causal graph is known separately. Therefore, I don't think the key distinction between causal discovery and causal effect identification lies in whether the causal graph is known.
2. Although Schkoda et al. (2024) focus on the problem of causal discovery, they also estimate the causal effects. Specifically, lines 6 and 7 of Algorithm 1 in Schkoda et al. (2024) estimate both causal effects and cumulants of exogenous noise. This is very similar to Algorithm 1 in this paper. According to my understanding, Algorithm 1 in Schkoda et al. (2024) can identify causal effects even when the causal graph is unknown, while Algorithm 1 in this paper can only identify causal effects when the causal graph is known. Please correct me if I'm wrong.
3. In lines 233~234, the authors claim that "the cumulants of different exogenous noises are generically distinct". I think the authors may implicitly assume that the distributions of exogenous noises are not symmetric. As we know, for any random variable with a symmetric distribution and any odd number $k$, the $k$-th cumulant of the random variable is 0.

I'm happy to raise my score if the authors can address the above concerns.
===========After further rebuttal============ Most of my concerns have been addressed. As I promised, I have increased my score to 4.

---

Reply to Comment 1.1.1: Comment: - *In the problem of causal effect identification, the causal graph is not always assumed to be known. For instance, Tramontano et al. (2024b) have proven identifiability of causal effects in the case where the causal graph is unknown and the case where the causal graph is known separately. Therefore, I don't think the key distinction between causal discovery and causal effect identification lies in whether the causal graph is known.*

Yes, indeed, there has been recent work on the joint identification of the graph and the causal effect. While this is certainly an interesting line of research, there is a much wider and well-established literature that treats the two problems separately. In this work, we follow this latter tradition and assume that the causal graph is known. It is important to note that a priori knowledge of the causal graph leads to simpler identification procedures and, consequently, more efficient estimators (that are also more easily applied to real data). To make our point in an oversimplified setting: Suppose we know that $X$ causes $Y$ and that no latent variables are at play. Then the causal effect of $X$ on $Y$ may be estimated by standard least squares regression. This is in contrast to more involved procedures that first need to resolve the causal direction and rule out the presence of latent variables.

- *Although Schkoda et al. (2024) focus on the problem of causal discovery, they also estimate the causal effects. Specifically, lines 6 and 7 of Algorithm 1 in Schkoda et al. (2024) estimate both causal effects and cumulants of exogenous noise. This is very similar to Algorithm 1 in this paper. According to my understanding, Algorithm 1 in Schkoda et al.
(2024) can identify causal effects even when the causal graph is unknown while Algorithm 1 in this paper can only identify causal effects when the causal graph is known. Please correct me if I'm wrong.*

It is true that when the causal effect is identifiable without knowledge of the graph, i.e., when Theorem 3.3 in Tramontano et al. (2024b) applies, Algorithm 1 in Schkoda et al. (2024) also identifies the correct causal effect. However, there are two issues with this approach:
1. There is a loss of statistical efficiency in jointly estimating the graph and the effect, as is apparent in Fig. 6. (This was also our point above.)
2. There are instances in which the causal effects of interest are not identifiable without knowledge of the graph, such as the underspecified instrumental variable graph we consider in Section 3.2. In this case, an approach that does not explicitly impose causal assumptions on the graph would fail to identify the correct causal effect. This can be verified both theoretically (using Theorem 3.3 in Tramontano et al. (2024b)) and empirically through our experiments in Section 6.2.

- *In lines 233~234, the authors claim that "the cumulants of different exogenous noises are generically distinct". ... As we know, for any random variable with a symmetric distribution and any odd number $k$, the $k$-th cumulant of the random variable is 0.*
(2011)) and is too restrictive to accommodate many relevant scenarios, such as Instrumental Variable regression. For a detailed discussion on the distinction between global and generic identifiability in linear structural equation models, we refer to Part III of Drton (2018). Regarding the reviewer's statement, it is indeed true that if the exogenous noise distributions are symmetric, their $(2k-1)$-th cumulants vanish, making them indistinguishable based on odd-order cumulants. However, this issue is already accounted for in our notion of genericity. Specifically, for any $k$, the set of cumulant tensors corresponding to symmetric distributions forms a measure-zero subset of $\mathcal{M}^{(\leq k)}(\mathcal{G})$. In other words, cumulant tensors associated with symmetric distributions are not generic. Practically, even when restricting to symmetric distributions, our approach remains valid by considering only even-order cumulants. This comes at the cost of using higher-degree cumulants than $k(l)$, but the core identifiability argument remains intact. **Drton et al. (2011)** - M. Drton, R. Foygel, and S. Sullivant, Global identifiability of linear structural equation models, 2011. **Drton, (2018)** - M. Drton, Algebraic problems in structural equation modeling, 2018.
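Both points discussed in this thread — that k-statistics are unbiased but noisy at small sample sizes, and that odd-order cumulants vanish for symmetric noise — can be illustrated numerically. The following is our own stdlib-only sketch (the formulas are the standard third and fourth k-statistics; the toy Gaussian and exponential distributions are assumptions for illustration):

```python
import random
import statistics

def k3(xs):
    """Third k-statistic: the unbiased estimator of the 3rd cumulant."""
    n = len(xs)
    m = sum(xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return n * n * m3 / ((n - 1) * (n - 2))

def k4(xs):
    """Fourth k-statistic: the unbiased estimator of the 4th cumulant."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return (n * n * ((n + 1) * m4 - 3 * (n - 1) * m2 * m2)
            / ((n - 1) * (n - 2) * (n - 3)))

def k4_spread(sample_size, trials=200):
    """Empirical std of k4 over repeated Exp(1) samples (true kappa_4 = 6)."""
    return statistics.pstdev(
        k4([random.expovariate(1.0) for _ in range(sample_size)])
        for _ in range(trials))

random.seed(0)
gauss = [random.gauss(0, 1) for _ in range(20000)]      # symmetric noise
expo = [random.expovariate(1.0) for _ in range(20000)]  # kappa_4 = 6
```

With these samples, `k3(gauss)` should land near its true value 0 (symmetric distribution), `k4(expo)` near its true value 6, and `k4_spread` should shrink as the sample size grows — matching the unbiased-but-high-variance behavior of k-statistics noted in the rebuttal.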
Summary: The paper studies the problem of estimating causal effects in lvLiNGAM via higher-order cumulants. Specifically, the authors consider two setups: one where a single proxy variable exists, and one where an instrumental variable (IV) exists with multiple treatments. In both settings, the authors provide effect identification results and corresponding estimation methods. The conducted experiments verify the effectiveness of the proposed methods. Claims And Evidence: The claims are generally well-supported by theoretical analysis. Methods And Evaluation Criteria: The proposed method is reasonable and well-motivated. Theoretical Claims: The theoretical claims are clearly presented, and the proofs are well-structured. Experimental Designs Or Analyses: The experimental design and analyses are generally sound. Supplementary Material: The supplementary material provides detailed proofs for the theoretical results. I read some of it. Relation To Broader Scientific Literature: NAN Essential References Not Discussed: NAN Other Strengths And Weaknesses: strengths 1. The paper is well-written. 2. This paper extends the original results of [1], which can only identify the effects up to some equivalence of values. With the help of additional variables, proxy or IV, this paper shows the effect is identifiable. Weaknesses or Questions 1. Lack of an intuitive example or explanations for the main theorems. It could be better to move the proofs in the main text to the appendix and add more discussion. 2. What is the key role of the additional variables (proxy and IV) compared with the results in [1]? How does it benefit the identification? 3. Why does the method of 'Cumulant with Minimization' only occur in the experiments regarding the graph $\mathcal{G}_3$? Reference: [1] Schkoda, D., Robeva, E., and Drton, M. Causal discovery of linear non-Gaussian causal models with unobserved confounding.
Other Comments Or Suggestions: Typo: line 131: $\mathcal{M}(G))$ should be $\mathcal{M}(G)$ line 153: contain should be contains line 665: fist should be first Questions For Authors: See Weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments and helpful suggestions. - *Lack of intuitive example or explanations for the main theorems. It could be better to move the proofs in the main text into the appendix and add more discussion.* We will use the additional page in the final version of the paper to provide further explanations and enhance the clarity of our theorems. In short, all our theorems are based on the following intuition. Theorem 3.1 establishes that there exists a polynomial of degree $l+1$ (where $ l $ is the number of latent variables in Fig. 1) whose coefficients can be computed using only the cumulants up to order $k(l)$. The roots of this polynomial correspond exactly to the causal effects of the variables $L_i$ for $1\leq i \leq l $ and $ V_1 $ on $ V_2 $. Solving this polynomial system reduces the space of possible solutions for the causal effect from $ V_1 $ to $ V_2 $ to a finite set of size $ l+1$. To further refine this solution and identify the correct causal effect uniquely, we need to derive additional equations that can only be satisfied by the true causal effect. This requires analyzing the nonlinear relationships among the cumulants of the observed distribution. Theorems 3.4--3.7 detail how this approach applies to the respective graphs. - *What is the key role of the additional variables (proxy and IV) compared with the results in [1]? How does it benefit the identification?* The problem addressed in [1] pertains to *causal discovery*—learning the causal structure from observational data—which is distinct from the problem of causal effect identification. Specifically, [1] seeks to recover all causal graphs consistent with the observed data, but multiple such graphs may exist, leading to different possible causal effects. In contrast, causal effect identification assumes that the causal structure is known and focuses on determining the effect of a treatment on an outcome. 
This distinction is fundamental in the literature (see, for example, the seminal work of Shpitser \& Pearl (2006)). Following this framework, we assume that the causal graph is given and aim to uniquely identify the target causal effect. **Shpitser \& Pearl (2006)** - Shpitser \& Pearl, Identification of Joint Interventional Distributions in Recursive Semi-Markovian Causal Models, AAAI, 2006. - *Why does the method of 'Cumulant with Minimization' only occur in the experiments regarding the graph?* As mentioned in the final paragraph of Page 6, for the proxy setup shown in Figure 3 with a single latent variable (i.e., graph $\mathcal{G}_3$), we proposed an optimization-based technique that relies on computing lower-order cumulants compared to those used in the standard "Cumulant" method. We refer to this approach as "Cumulant with Minimization", and it is tailored specifically to the structure of $\mathcal{G}_3$. As such, its results are presented only for graph $\mathcal{G}_3$ in Figure 6. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. Most of my questions are addressed. I will keep my score leaning towards acceptance.
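As a loose numerical illustration of the root-solving step sketched in the rebuttal above: solving a degree-$(l+1)$ polynomial yields a finite candidate set containing the true effect. This toy sketch builds the polynomial directly from hypothetical effect values rather than from cumulants; computing the coefficients from cumulants of order up to $k(l)$ is the actual content of Theorem 3.1 and is not reproduced here.

```python
import numpy as np

# Hypothetical causal effects of L_1, L_2 and V_1 on V_2
# (l = 2 latent variables, so the polynomial has degree l + 1 = 3).
true_effects = [2.0, -1.5, 0.5]

# In the paper these coefficients would be computed from observed
# cumulants; here we construct them directly from the roots.
coeffs = np.poly(true_effects)

# Solving the polynomial recovers the candidate set of size l + 1;
# extra cumulant equations are then needed to single out the true
# effect of V_1 on V_2 from this finite set.
candidates = np.roots(coeffs)
print(np.sort(candidates.real))  # the three candidate effects
```

The point of the sketch is only that once the coefficients are available, the ambiguity reduces from a continuum to $l+1$ candidate values.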
Summary: This paper explores causal effect identification in latent variable Linear Non-Gaussian Acyclic Models (lvLiNGAM) using higher-order cumulants, addressing two challenging scenarios involving latent confounding: (1) a single proxy variable that may influence the treatment and (2) underspecified instrumental variable (IV) cases with fewer instruments than treatments. The authors theoretically prove that causal effects are identifiable under these conditions and propose corresponding closed-form estimation methods. Experimental results demonstrate the accuracy and robustness of the proposed approaches. ## update after rebuttal I thank the authors for their rebuttal. My score remains unchanged. Claims And Evidence: The claims in this paper are clear and convincingly supported. Methods And Evaluation Criteria: This paper builds on the existing literature on LiNGAM-based structure discovery, shifting focus toward causal effect identification, which has received comparatively less attention. Following the lvLiNGAM framework, the study aims to identify specific entries of the mixing matrix using finitely many cumulants of the observational distribution under two challenging settings. By leveraging higher-order cumulants, the proposed method introduces the following advancements: a). Single Proxy Variable Setting: It allows a causal edge from the proxy to the treatment, thereby relaxing the previous assumption that each latent confounder must have exactly one proxy variable. b). Underspecified Instrumental Variable Setting: It relaxes the assumption that the number of instrumental variables must be at least equal to the number of treatments. The proposed approach appears methodologically sound, with a well-justified use of cumulants for causal effect identification. Theoretical Claims: I have looked through the theorems and proofs and did not find evident issues. 
However, as I am not an expert in the causal effect identification domain, I will defer to other reviewers for further validation during the rebuttal phase. Experimental Designs Or Analyses: This paper evaluates its proposed methods through experiments on synthetic data across several representative causal graphs. For each of the two settings, the authors compare their approach against baseline methods, with results demonstrating its advantages. While the findings support the effectiveness of the proposed methods, applying them to a real-world dataset may further strengthen the validation and highlight their practical applicability. Supplementary Material: I skimmed the supplementary material but didn't delve too deeply into it. Relation To Broader Scientific Literature: This paper focuses on relaxing assumptions and enhancing identifiability in causal effect estimation, which contributes to the broader literature on causal inference. By improving identifiability, the proposed approach may increase the applicability of causal effect estimation methods to real-world problems, making them more practical in settings where strict assumptions are difficult to satisfy. Essential References Not Discussed: I didn't find other essential references that are not discussed. Other Strengths And Weaknesses: All are listed above. Other Comments Or Suggestions: All are listed above. Questions For Authors: All are listed above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed comments and helpful suggestions. - *While the findings support the effectiveness of the proposed methods, applying them to a real-world dataset may further strengthen the validation and highlight their practical applicability.* Following the reviewer's input, we have evaluated our methods on the data from Card and Krueger (1993), similar to the experiments conducted by Kivva et al. (2023). We provide a brief overview of the results here, while a more complete discussion will be included in the revised version of the manuscript. The goal of the study is to estimate the effect of the minimum wage on the employment rate. In our experiments, we utilized the same preprocessing procedure as Kivva et al. (2023) and found that the results of our method, when assuming graphs $\mathcal{G}_1$ or $\mathcal{G}_2$, are consistent with findings in the existing literature. Specifically, the estimated causal effects under $\mathcal{G}_1$ and $\mathcal{G}_2$ are 2.68 and 2.71, respectively. Previous methods, such as the cross-moment approach (Kivva et al., 2023) and the Difference-in-Differences method, also provide an estimate of 2.68. In contrast, when assuming $\mathcal{G}_3$ as the true graph, we estimate a causal effect of 8.26. While this result still indicates a positive effect of the treatment on the outcome—consistent with prior work—the value of the estimated causal effect diverges from the ones reported in the literature. This suggests that, for this dataset, a causal graph excluding an edge from the proxy to the treatment (as in $\mathcal{G}_1$ or $\mathcal{G}_2$) may provide a better representation of the causal relationships. The code to reproduce the results can be found at https://anonymous.4open.science/r/CEId-from-Moments-20AC/estimation/real_data.ipynb. **Card and Krueger (1993)** - D. Card and A. B. Krueger.
Minimum wages and employment: A case study of the fast-food industry in New Jersey and Pennsylvania, 1993. **Kivva et al. (2023)** - Y. Kivva, S. Salehkaleybar, N. Kiyavash, A Cross-Moment Approach for Causal Effect Estimation, NeurIPS 2023.
Federated Node-Level Clustering Network with Cross-Subgraph Link Mending
Accept (poster)
Summary: In this study, the authors investigate two unexplored issues in federated graph learning (FGL), namely: 1) the heavy reliance on labeled graph samples that are difficult to obtain in real-world applications; and 2) the inevitable missing links caused by partitioning a complete graph into several subgraphs. To address these issues, the authors propose an easy-to-understand federated learning algorithm, named FedNCN, which introduces a dynamic graph construction scheme to mend the missing links among subgraphs. The main results on benchmark datasets show improvements in clustering performance compared to existing methods. Claims And Evidence: The evidence provided in support of the claims in this paper is convincing. Methods And Evaluation Criteria: Yes, the evaluation criteria make sense. Theoretical Claims: The paper primarily focuses on experimental validation, and no major theoretical inconsistencies were identified. Experimental Designs Or Analyses: Through a series of experiments, the authors verify the effectiveness of the method. Compared to existing advanced FGL methods, FedNCN achieves better results. Supplementary Material: The authors provide detailed supplementary material to further illustrate details omitted from the main paper due to space limitations. Relation To Broader Scientific Literature: This topic and the obtained findings are interesting to the federated graph learning community. Essential References Not Discussed: No, the related works that are essential to understanding the key contributions of the paper have been included in the manuscript. Other Strengths And Weaknesses: Strengths: The article is well-motivated. It is meaningful and reasonable to provide some insights about the newly designed choices, which are valuable for researchers in related areas. In addition, some experimental results seem good. Weaknesses: (W1) There are still some issues that should be further discussed.
For example, in Table 1 on page 6, although the proposed method achieves the best performance on the Questions dataset compared to the advanced FGL method, its NMI and ARI scores are relatively low compared to other datasets. What are the potential reasons for this phenomenon? FedNCN first maximizes edge construction and then minimizes edge retention. In future work, adversarial learning [1, 2] could be used to further refine the entire process of FedNCN. [1] Gong L, Zhou S, Tu W, et al. Attributed Graph Clustering with Dual Redundancy Reduction[C]//IJCAI. 2022: 3015-3021. [2] Suresh S, Li P, Hao C, et al. Adversarial graph augmentation to improve graph contrastive learning[J]. Advances in Neural Information Processing Systems, 2021, 34: 15920-15933. (W2) The text contains some redundant expressions that should be optimized. For example: 1) Left column, lines 331-334: "Here, 'Local' denotes the use of only our local model, while in 'FedAvg', 'FedProx', and 'FedPer', different aggregation methods are applied by the server, with the client all using our local model." This content has already been described in the main text and is reiterated in the table caption, leading to redundancy. It is recommended to express it only once to avoid repetition. 2) Right column, lines 344-346: "'BS' denotes the local model of FedNCN. 'BS+GKS' denotes the FedNCN without mending the missing links, and 'BS+GKS+CLM' denotes the FedNCN." This description is repetitive and can be condensed to improve readability and logical clarity. Other Comments Or Suggestions: No further comments or suggestions. Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Response to Reviewer: **(W1):** **Reasons for the low NMI and ARI:** Thanks. The Questions dataset exhibits a significant class imbalance, with 47461 samples for class 0 and 1460 for class 1, which could result in lower NMI and ARI performance. Although class imbalance presents significant difficulties, our approach demonstrates competitive results relative to existing advanced methods. In future work, we will further improve FedNCN to enhance its performance in scenarios with class imbalance. **Future optimization directions:** Thank you for your valuable suggestion. We agree that our method could indeed be further improved by incorporating adversarial learning [1, 2] in future work. Specifically, there are two steps in our paper: maximizing edge construction and then minimizing edge retention. Transforming these two steps into a dynamic optimization framework using adversarial learning could enhance the robustness of FedNCN when the server mends missing links. We will consider this direction in our future work to further refine the proposed FedNCN. **(W2):** **Expression:** Thanks. We have tried our best to avoid redundant expressions according to your advice. In addition, we will further refine the writing in the final version of the paper to make it more concise. Thanks for your constructive comments; we hope our responses address your concerns. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response; most of my previous concerns have been addressed. As discussed in the response, introducing the idea of adversarial learning has the potential to further improve the current version. Additionally, I would like to know how FedNCN would perform if we directly uploaded the representative signals to construct the global graph, without maximizing edge construction and minimizing edge retention.
--- Reply to Comment 1.1.1: Comment: # Response to Reviewer: Thank you for your follow-up and constructive feedback. To address your question, we have conducted additional experiments to compare the clustering performance of the proposed FedNCN and its variants. Specifically, in our setup, the "FedNCN_v1" method denotes a variant of FedNCN that directly uploads representative signals to construct a global graph for global missing link recovery. Moreover, the "Local" method denotes the local model of the proposed FedNCN. For convenience, we present the corresponding results at the following hyperlink due to the space limit. As seen at this URL: https://anonymous.4open.science/r/FedNCN-rebu2-FF6C/metric.png, several major observations can be made: 1) compared to the "Local" method and the "FedNCN_v1" method, FedNCN produces ACC performance gains of 26.07% and 28.65% on the Photo dataset in the non-overlapping setting with 5 clients, indicating that the proposed cross-subgraph link mending strategy plays an essential role in effectively handling federated node-level clustering; 2) compared to the "Local" method, the "FedNCN_v1" method even performs worse in many cases. For instance, on the Computer dataset in the non-overlapping setting with 10 clients, the ACC of the "FedNCN_v1" method drops by 10.03% compared to the "Local" method. These findings demonstrate that the direct application of KNN graph construction for global link recovery inadequately captures the relationships between intra-cluster and inter-cluster nodes in the uploaded signals, resulting in sub-optimal performance. We hope that our responses will address your concerns.
Summary: This paper designs a federated graph learning framework called Federated Node-Level Clustering Network (FedNCN) that mends the cross-subgraph missing links to enhance the clustering performance of each client in an unlabeled circumstance while not sharing private data. The work also conducts experiments on several benchmark datasets using the proposed method. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes, I believe that the methods and evaluation criteria presented in this paper are meaningful for the proposed problem and have been clearly described. Theoretical Claims: The stability guarantee provided by FedNCN is both interesting and well-supported. Experimental Designs Or Analyses: Yes, the author has provided a detailed analysis. Supplementary Material: Yes, I have reviewed the supplementary material including detailed explanations of the notation, additional experiments, and the implementation of the algorithms. Relation To Broader Scientific Literature: Different from existing supervised federated graph learning methods, the proposed federated learning framework is the first to explore the issue of missing links in an unsupervised setting. Essential References Not Discussed: No. Other Strengths And Weaknesses: Pros: - In contrast to existing unsupervised FGL methods, FedNCN is proposed to address the issue of missing links caused by graph partition, which reveals a certain novelty. - The motivations are presented clearly, and each innovation has an intuitive explanation. In particular, the proposed cross-subgraph link mending strategy is a significant technical contribution to the field of node-level FGL. - The results show that FedNCN consistently outperforms existing methods, even when fewer real labels are used. This robust experimental evidence supports the paper's claims. Cons: - Both the local model and the global model in FedNCN compute cluster centers. What is the difference between these two types of clustering centers? 
Moreover, are these two cluster centers updated iteratively, or are they calculated only once? The authors should clarify this to help the reader understand. - In Table 1, what does the "*" next to the methods signify? The paper does not provide an explanation for this. Could the authors provide a clear explanation for readers? - To enhance the reproducibility of the experiments, the authors should share the source code of the proposed method. - The limitations are not discussed in the paper. The authors need to provide a brief discussion of the limitations of the study to offer a more balanced and comprehensive evaluation of FedNCN. This would provide valuable guidance for future research in unsupervised FGL. - Some typos need to be carefully checked and corrected, e.g., “... crucial role in FedGCN...” --> “... crucial role in FedNCN...”. Other Comments Or Suggestions: See the weaknesses. Questions For Authors: See the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Response to Reviewer: **1. Clustering centers:** Thanks. In FedNCN, both the client and the server have models that are used to learn the graph embeddings. The method for calculating the cluster centers is the same for both, initialized by K-means and updated iteratively by the model. However, the former (local model) learns the local graph data assigned to each client to obtain key clustering signals for subsequent cross-subgraph links restoration, while the latter (global model) learns the mended graph to obtain consensus prototypes that guide each local model to better cluster. **2. Explanation:** Thanks. In our manuscript, the methods marked with \* indicate that supervised methods are adapted to an unsupervised scenario. **3. Source code:** Thanks. The complete source code and datasets will be available for reproducibility if the paper is accepted. **4. Limitations and future work:** Thanks. Our method is designed for federated node-level tasks with cross-subgraph link mending in the unsupervised scenario. However, many scenarios involve federated graph-level clustering in practical applications, such as social network analysis and disease prediction. In federated graph-level clustering, each client learns from independent graph-level datasets, which exhibit significant structural heterogeneity. The divergence in multi-source data sharing is further exacerbated due to the lack of label guidance, making it difficult for our current model to effectively address these issues. Future work can explore a cross-domain federated learning framework to be applied to broader scenarios. **5. Detailed issues:** Thanks for your careful review. We have revised the sentence “the hyperparameter $k$ indeed plays a crucial role in FedGCN, ...” to “the hyperparameter $k$ indeed plays a crucial role in FedNCN, ...”. We will try our best to double-check the manuscript carefully and correct similar typos in our final version. 
Thanks for your constructive comments; we hope our responses address your concerns.
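The cluster-center procedure described in the rebuttal above (K-means-style initialization followed by iterative updates) can be sketched generically; the following is a plain assignment/update loop on synthetic embeddings, not the authors' actual local or global model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated synthetic "embedding" clusters.
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(5.0, 0.3, (50, 2))])

# K-means-style initialization: pick k data points as initial centers.
k = 2
centers = X[rng.choice(len(X), size=k, replace=False)]

# Iterative refinement, standing in for the updates the model performs.
for _ in range(10):
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])

print(np.sort(centers[:, 0]))  # roughly [0., 5.]
```

In FedNCN's description, the same initialize-then-refine pattern is used on both the client and server sides, but on local subgraph embeddings versus the mended global graph, respectively.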
Summary: The authors propose a Federated Node-Level Clustering Network (FedNCN), which is the first attempt to tackle the issue of missing links caused by graph partition in an unsupervised learning scenario. The core idea of FedNCN is to mend the destroyed links using prior clustering knowledge. Extensive experiments have been conducted to evaluate the performance of FedNCN. Claims And Evidence: Yes, the claims made in the submitted paper are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, this paper uses benchmark datasets and widely accepted evaluation metrics in the field. Theoretical Claims: The authors assume that connected nodes typically have high feature similarity, an assumption that is natural and has been used in previous work. Experimental Designs Or Analyses: The experimental setup is clearly presented, and the comparative experiments, ablation studies, and hyperparameter analysis are comprehensively covered. Supplementary Material: The authors provided additional details in terms of the experimental results and the algorithm process. Relation To Broader Scientific Literature: The paper proposes a federated graph learning framework that is capable of mending the destroyed subgraph links across clients. Essential References Not Discussed: All the references that are crucial to the key contributions of this paper have been cited and discussed. Other Strengths And Weaknesses: Strengths 1. Overall, I think this is a good work that takes a step forward in unsupervised federated node-level graph learning for missing links. This work effectively leverages the prior learned clustering knowledge to enable the model to accurately conduct cross-subgraph link mending, which promotes the strong clustering encoding capacity of the local model. 2. The problem addressed in this paper is evident, and the innovations are novel, demonstrating its research value. Moreover, the overall structure of the paper is well-organized.
Necessary illustrative figures are provided to help readers understand the contents. 3. Experiments on five graph benchmark datasets demonstrated the effectiveness and superiority of the proposed FedNCN against its competitors. Furthermore, ablation studies and convergence analysis further confirm its strong potential for practical applications. Weaknesses 1. The authors should further explain the relationship among the three proposed components, i.e., the local model learning strategy, the cross-subgraph link mending strategy, and global knowledge sharing strategy. 2. A few errors need to be checked and corrected. For example, in Figure 2, do 'clustering signals' and 'uploaded signals' refer to the same entities? If so, they should be consistently noted in the figure. 3. I have some questions about certain contents in this paper. In the left column on page 6, lines 320-322: “Here, we consider the scenario where there are no overlapping nodes between subgraphs. ” What does the scenario with overlapping nodes refer to, and why is this scenario not considered in this paper? Are there any other possible scenarios? Other Comments Or Suggestions: All concerns are presented as weaknesses. Questions For Authors: Please check the weaknesses in “Other Strengths And Weaknesses” section. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Response to Reviewer: **1. Relationships between the three components:** Thanks. In our approach, the local model learning strategy collects and preserves more reliable clustering signals to prepare for the recovery of damaged sample connections. The cross-subgraph link mending strategy utilizes the prior learned clustering knowledge to establish correct links between subgraphs, which provides high-quality data for global consensus learning. The global knowledge sharing strategy learns clustering-friendly consensus prototypes based on the mended graph and ensures reliable feedback for each client, enhancing the discriminative ability of each local model's graph embeddings. These three components are seamlessly integrated into a unified optimization framework. **2. Fig. 2 revision:** Thanks for your careful review. In Figure 2, 'clustering signals' and 'uploaded signals' refer to the same entities, and we have revised the notation for consistency according to your advice. We will update Fig. 2 in the final version. **3. Definition of “overlapping”:** The term "overlapping" has been widely used in previous studies on federated graph learning [1, 2], referring to the scenario where different subgraphs are distributed across multiple clients and share some nodes. In contrast, since the problem definition in our paper involves the structure loss caused by partitioning a complete graph into several subgraphs, we do not consider node overlapping and instead focus on the scenario where there are no overlapping nodes between subgraphs. [1] Baek, J.; Jeong, W.; Jin, J.; Yoon, J.; and Hwang, S. 2023. Personalized subgraph federated learning. In *ICML*, 1396-1415. [2] Zhu, Y.; Li, X.; Wu, Z.; Wu, D.; Hu, M.; and Li, R. 2024. FedTAD: Topology-aware Data-free Knowledge Distillation for Subgraph Federated Learning. In *IJCAI*, 5716–5724. Thanks for your constructive comments; we hope our responses address your concerns.
Summary: This paper introduces federated node-level clustering that achieves cross-subgraph link mending under unsupervised circumstances. The proposed approach is mainly composed of three components, i.e., the local model learning scheme that collects and preserves trustworthy clustering signals for destroyed sample link restoration, the cross-subgraph link mending scheme that establishes correct links among subgraphs with the aid of prior learned clustering knowledge, and the global knowledge sharing scheme that learns high-quality consensus features based on the mended graph and ensures reliable feedback to each client. Abundant experiments on five benchmark datasets have been conducted. Claims And Evidence: The authors claim that they use graph kernel similarity and N-cut to dynamically construct the mended graph. However, there seems to be a lack of discussion about the intuition behind using such a method. For example, why use dynamic graph construction instead of KNN graph construction? Methods And Evaluation Criteria: The authors employ five benchmark datasets (i.e., CiteSeer, PubMed, Amazon-Computer, Amazon-Photo, and Questions) and four widely used evaluation metrics (i.e., ACC, NMI, ARI, and F1) to evaluate the proposed method. Theoretical Claims: Yes, I have checked. It would be better if the authors further demonstrated the advantages of dynamic graph construction compared to the traditional KNN-based approach. Experimental Designs Or Analyses: Yes, the experimental results are sufficient, and the corresponding analyses are self-consistent. Supplementary Material: Yes, I have reviewed all content in the appendix. Relation To Broader Scientific Literature: The paper builds on recent advances in federated graph learning models, extending ideas from prior literature such as FedPUB and FedTAD.
It attempts to introduce a novel method of dynamic graph construction in unsupervised settings, using graph kernel similarity and N-cut to enhance the clustering performance of each local model. Essential References Not Discussed: This paper utilizes federated learning to train GCN models with reduced communication overhead, and the work on distributed graph learning with cross-client edges using homomorphic encryption [1] should be discussed. [1] Yao, Yuhang, et al. "FedGCN: convergence-communication trade-offs in federated training of graph convolutional networks." In NeurIPS 2023. Other Strengths And Weaknesses: Advantages: i. Novelty and Innovation: This paper proposes a new federated node-level clustering network for cross-subgraph link restoration, which is a fresh perspective in federated graph learning. ii. Technical Contribution: The authors design three components (the local model learning, the cross-subgraph links mending, and the global knowledge sharing) that are seamlessly integrated into a unified optimization framework, offering a more reasonable way to mend cross-subgraph links. Disadvantages: i. Some concerns: a) Can you describe what is the clustering ground truth in this task? Is it simply the node label? b) How are the parameters of each local model initialized in the federated learning framework? How is the initialization of the learnable weight matrix W achieved? More explanations should be given. c) Federated graph-level clustering also represents a variant of unsupervised federated graph learning. Could the proposed network be extended to handle this task? d) It would be better that the authors further prove the advantages of dynamic graph construction compared to the traditional KNN-based approach. ii. Minor writing issues: a) The dataset names are not consistent. For example, Table 1 and Table 2 use 'Amazon-Computer' and 'Amazon-Photo,' while Figures 3, 4, 5, and 6 use 'Computer' and 'Photo.' 
While these do not significantly affect the content, they should be corrected to improve professionalism and readability. b) The presence of unnecessary punctuation marks (e.g., extra period at line 104 in the left column). Other Comments Or Suggestions: N/A. Questions For Authors: a) Can you describe what is the clustering ground truth in this task? Is it simply the node label? b) How are the parameters of each local model initialized in the federated learning framework? How is the initialization of the learnable weight matrix W achieved? More explanations should be given. c) Federated graph-level clustering also represents a variant of unsupervised federated graph learning. Could the proposed network be extended to handle this task? d) It would be better that the authors further prove the advantages of dynamic graph construction compared to the traditional KNN-based approach. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Response to Reviewer: **i. Some concerns:** **a) Clustering ground truth:** Thanks for your question. The clustering ground truth is the true cluster category of the samples. In our paper, it corresponds to the node label. The ground truth is used only in the evaluation stage of this task. **b) Parameter initialization:** Thanks. We follow the standard parameter initialization practice in federated learning, as in previous work [1, 2]. The learnable weight matrix **W** is randomly initialized at the global server and subsequently delivered to the local models for learning. [1] Baek, J.; Jeong, W.; Jin, J.; Yoon, J.; and Hwang, S. 2023. Personalized subgraph federated learning. In *ICML*, 1396-1415. [2] Zhu, Y.; Li, X.; Wu, Z.; Wu, D.; Hu, M.; and Li, R. 2024. FedTAD: Topology-aware Data-free Knowledge Distillation for Subgraph Federated Learning. In *IJCAI*, 5716–5724. **c) Federated graph-level clustering:** Thanks. It is indeed an interesting and meaningful task. Federated node-level clustering clusters the nodes of a single graph, while federated graph-level clustering clusters multiple graphs. Both operate in the federated learning setting. The proposed network encounters several difficulties when handling the federated graph-level clustering task. For instance, compared to node-level tasks, graph-level tasks involve data that may originate from different domains and have more complex graph structures. This makes it difficult to capture common patterns across multiple clients. In future work, we aim to overcome these challenges and extend our model to federated graph-level clustering tasks. **d) Advantages:** Thanks. Due to the non-uniform distribution of instances in the sample space, the traditional KNN graph construction method has inherent flaws [1-3]. Using different values of $K$ for different classes outperforms using a fixed $K$ across all classes; Li et al. [4]
have drawn this conclusion through mathematical derivation. Based on this theoretical claim, we further demonstrate the advantages of dynamic graph construction through experiments, compared to the traditional KNN-based approach. The experimental results have been provided at the anonymous link https://anonymous.4open.science/r/FedNCN-rebu-418F/data.png [1] S, Zhang. 2020. Challenges in KNN Classification. *IEEE Transactions on Knowledge and Data Engineering*, 4663-4675. [2] S, Kazemi.; R, Goel.; K, Jain.; I, Kobyzev.; A, Sethi.; P, Forsyth.; and P, Poupart. 2020. Representation learning for dynamic graphs: A survey. *Journal of Machine Learning Research*, 1-73. [3] M, Munir.; W, Avery.; M, Rahman.; and R, Marculescu.; 2024. Greedyvig: Dynamic axial graph construction for efficient vision gnns. In *CVPR*, 6118-6127. [4] B. Li.; Y. Chen.; and Y. Chen. 2008. The nearest neighbor algorithm of local probability centers", In *IEEE Transactions on Systems, Man, and Cybernetics: Systems*. 141-154. **ii. Minor writing issues:** **Typos:** Thanks for your careful review again! These typos have been revised. a) We have made the dataset names consistent in Table 1, Table 2, and Figures 3-5. b) We have removed unnecessary symbols (e.g., the extra period at line 104 in the left column). Moreover, we have tried our best to correct similar typos and double-checked throughout the paper. **Essential References Not Discussed:** **Adding the reference:** Thanks for your careful review again! The FedGCN [1] method that you mentioned has been discussed in the final version. [1] Yao, Yuhang, et al. "FedGCN: convergence-communication trade-offs in federated training of graph convolutional networks." In NeurIPS 2023. Thanks for your constructive comments, we hope that our responses will make you satisfied. --- Rebuttal Comment 1.1: Comment: Thank you for the author's response. I have also reviewed the comments from the other reviewers and the corresponding replies from the author. 
I will maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you for your support and thoughtful suggestions. We will carefully revise the final version based on your valuable feedback.
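The fixed-$K$ versus per-class-$K$ graph construction compared in rebuttal point (d) above can be sketched as follows. This is an illustrative numpy sketch, not the authors' implementation; the data points and per-node $k$ values are made up for demonstration, with smaller $k$ in sparse regions of the sample space:

```python
import numpy as np

def knn_graph(X, k_per_node):
    """Directed KNN adjacency matrix; node i links to its k_per_node[i]
    nearest neighbors (self-loops excluded)."""
    n = X.shape[0]
    # pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)  # never select yourself as a neighbor
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        nbrs = np.argsort(d2[i])[: k_per_node[i]]
        A[i, nbrs] = 1
    return A

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],   # dense cluster
              [5.0, 5.0], [9.0, 9.0]])              # sparse region
A_fixed = knn_graph(X, [2, 2, 2, 2, 2])             # traditional fixed K
A_dynamic = knn_graph(X, [2, 2, 2, 1, 1])           # smaller k where data is sparse
```

With a fixed $K$, isolated points in sparse regions are forced to link to far-away nodes; the per-node variant avoids this, which is the motivation cited from Li et al. [4].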
On the Statistical Mechanisms of Distributional Compositional Generalization
Accept (poster)
Summary: The paper investigates the statistical mechanisms underlying Distributional Compositional Generalization (DCG), focusing on two key questions: 1) whether methods for one DCG task generalize to others, and 2) what statistical properties determine a learning algorithm’s compositional ability. The authors propose an invariant measure (μ) to unify diverse DCG methods, highlighting data adaptivity as critical for non-trade-off improvements. They derive a generalization bound decoupling IID error and compositional error, linking compositional capacity to mutual information and compatibility between learning algorithms and composition rules. The analysis emphasizes the role of statistical dependencies and algorithmic sensitivity in DCG, offering theoretical insights complementary to prior work. Claims And Evidence: The claims are supported by theoretical derivations but lack empirical validation. Methods And Evaluation Criteria: The theoretical framework is logically structured, using mutual information and compatibility measures to analyze DCG. However, the absence of empirical benchmarks or synthetic experiments weakens validation. The reliance on abstract statistical formulations may limit applicability to specific DCG tasks without further domain-specific adjustments. Theoretical Claims: The proofs for Theorem 4.7 and Theorem 5.5 rely on assumptions (e.g., L-bounded errors, composition rule recoverability in Assumption 5.4) that are plausible but not rigorously justified. The mutual information term \(I_{\mathcal{A}}(f_S; T|P_S^{(T)})\) is central but lacks intuitive interpretation in practical settings. Experimental Designs Or Analyses: No experiments are provided to validate the theoretical claims. The analysis remains purely mathematical, leaving open questions about the practical relevance of the proposed measures in real-world DCG tasks. Supplementary Material: Yes, all parts. 
Relation To Broader Scientific Literature: The work extends statistical learning theory to DCG, contrasting with IID-based generalization frameworks. It connects to NFL theorems by addressing task trade-offs but focuses specifically on compositional rules. Comparisons to Ben-David et al.’s domain adaptation bounds highlight novel aspects (algorithmic compatibility), though empirical validation is missing. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths include a novel statistical perspective on DCG and the invariant measure unifying diverse methods. Weaknesses include overly abstract formulations and lack of empirical grounding. Clarity suffers from dense notation and undefined terms. Other Comments Or Suggestions: 1. Terms like "knowledge composition" need clearer definitions. 2. The organizational structure of the paper needs adjustment, as readers without theoretical background find it difficult to follow. Questions For Authors: 1. How might the μ-measure be empirically estimated or validated in practice? Could this inform method selection for DCG tasks? 2. Assumption 5.4 requires recovering \(T\) from \(P_E^{(T)}\). Is this feasible for complex composition rules (e.g., in NLP or robotics)? 3. Are there plans to test the theory on synthetic or real-world DCG benchmarks (e.g., SCAN, COGS)? 4. Could the "composition rule" \(T\) be explicitly defined for canonical DCG tasks (e.g., attribute recombination)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > W1: lack empirical validation. We present new experimental results to validate the theoretical derivations (**see the "Experiments" section in the Rebuttal of Reviewer XpR4 for details**). Below is a summary of our findings: 1. We confirm that non-trade-off improvements are strongly correlated with the adaptivity of learning algorithms. 2. The results demonstrate that our bound is tighter than previous methods, supporting the importance of incorporating information about the learning algorithm and the data for a tighter bound. > W2: The reliance on abstract statistical formulations may limit applicability. The goal of this paper is to offer a broader perspective on DCG tasks. We fully acknowledge the importance of applying the framework to specific DCG tasks; however, substantial efforts have already been dedicated to this aspect in previous works. Rather than replicating those efforts, our paper seeks to address a gap that has not yet been fully explored. The value of our work should not be assessed in isolation. When considered within the broader context of the research community—where numerous theoretical studies on specific tasks already exist—our work provides a complementary perspective that deepens the overall understanding of DCG tasks. > W3: Terms like "knowledge composition" need clearer definitions. Knowledge composition refers to a learning algorithm's ability to understand and integrate individual components. Specifically, it reflects the algorithm's performance on DCG tasks when the impact of limited data is eliminated (i.e., given an infinite amount of data). To avoid ambiguity, we plan to replace "knowledge composition" with "component composition" and provide a clearer explanation. **If there are other terms that require clarification, we would greatly appreciate your feedback and will immediately address them.** > W4: readers without a theoretical background find it difficult to follow More examples and background will be provided.
The examples will cover the core concepts and the assumptions. The background will be provided in the Appendix to cover the basics of statistical machine learning. **The details can be seen in the section "More examples and background" of the Rebuttal to Reviewer nkeW**. We hope these efforts make the paper easier to follow for readers without a theoretical background. > Q1: Can $\mu$ be estimated or validated in practice? Could this inform method selection for DCG tasks? The core value of the $\mu$ measure is to provide an analysis of trade-off and non-trade-off improvement. We design a synthetic task to verify the trade-off and non-trade-off improvement (see **W1**). This has the following implications for method selection in DCG: 1) To improve the learning algorithm in a non-trade-off way, it is counterproductive to impose inductive biases or constraints, such as the group constraint, as this would cause the model to lose its data adaptivity. A more effective approach is to design a model that can fully leverage the information in the data. 2) Co-design of the learning algorithm and data engineering is important, as the non-trade-off improvement relies on a large value of $I_{\mathcal{A},\beta}(\tilde{T}=T,P_S)$. The compatibility between the data and the learning algorithm is essential for generalization. More detailed information can be seen in **Q1 in the response to Reviewer nkeW**. > Q2: Clarify assumptions? Assumption 5.3 requires that the error is bounded. For example, the 1-accuracy error, which ranges between 0 and 1, satisfies the assumption. Assumption 5.4 requires that the DCG problem is solvable, which makes the problem meaningful. Specifically, given all distributions, we can learn how the given components are combined. For example, if provided with images of various shapes and colors, we should be able to understand how shape and color interact to form specific images, such as the image of a red triangle.
This assumption guarantees that there **exists** a way to recover these compositional rules from the data. > Q3: Are there plans to test the theory on synthetic tasks? See W1. > Q4: "Composition rule" $T$ for canonical DCG tasks? The composition rule is the rule describing how two components are combined. It can have different representations depending on the data generative process. In image creation, the composition rule can be represented as ``Draw the contour of a $<$shape$>$ and fill it with $<$color$>$.'' Similarly, for a robotic task, the composition rule can be expressed as "First, complete $<$subtask1$>$, then $<$subtask2$>$." The composition rule can also have a non-text representation. If the image is generated by a generative model, then the composition rule can be represented as the part of the generative model that focuses on the combination of the two components. As long as these representations describe the same rule for component composition, we regard them as different representations of the same composition rule. --- Rebuttal Comment 1.1: Comment: Thanks to the authors; they have addressed my concerns. I raise the score to weak accept.
Summary: In this paper, the authors introduce a statistical framework to address two important research questions that have not been explored in prior work. Specifically, they examine whether a method designed for one DCG problem can be applied to another and identify the statistical properties that indicate a learning algorithm's capacity for knowledge composition. Claims And Evidence: The theoretical claims in the paper are supported by proofs. However, it would be ideal to include some experimental validation, even in simple toy settings, to further substantiate the findings. Methods And Evaluation Criteria: Not applicable. Theoretical Claims: The detailed proofs provided in the appendix were not reviewed, but all the claims presented in the main text were verified, and no issues were found. Experimental Designs Or Analyses: There are no experimental designs included. However, the analysis of the generalization bound, accounting for errors from two sources—insufficient data and knowledge composition—was examined. The analysis appears to be correct. Supplementary Material: No. Relation To Broader Scientific Literature: In contrast to prior works that focus on DCG problems with specific composition rules, the proposed invariant measure explores the relationships between different composition rules. Additionally, the theorems presented are not confined to any particular learning problem. More importantly, the proposed theoretical framework emphasizes non-tradeoff improvements in DCG problems. This is the first paper to decouple the influence of finite samples and knowledge composition in generalization analysis for out-of-distribution settings. The proposed bound is tractable for various composition rules and establishes a connection between generalization behavior and mutual information. Essential References Not Discussed: Essential references are discussed in detail, along with a thorough explanation of how the proposed framework differs from prior works.
Other Strengths And Weaknesses: ### Strengths 1. The paper addresses two important research questions in the area of DCG, and the proposed framework offers unique insights. 2. The paper is well-written, particularly the clear distinction between prior works and the current study. This effectively highlights the uniqueness of the proposed framework. 3. The analysis and theoretical framework presented in the paper will contribute to the development of better methods and provide a strong theoretical foundation for future research in this area. ### Weaknesses 1. There is a lack of experimental support or analysis, even in synthetic scenarios. 2. While the paper is generally well-written, it would be easier for readers to grasp the key ideas if the authors provided more intuitive explanations. For example, the paper introduces concepts like red rectangles and blue triangles, as well as Examples 3.2 and 3.3, but similar connections are missing in later sections. It would be beneficial if the authors explained major claims and conclusions from the theoretical study in a similar intuitive manner. 3. The practical implications of the proposed framework should be discussed in much more detail than in the current version. Other Comments Or Suggestions: diagram (1) in Section 3.3 would benefit from a more detailed description. Questions For Authors: **Questions** 1 (Important). In the non-tradeoff scenario, the authors propose keeping the $\mu$ measure fixed and suggest data-centric approaches as a viable solution. Could the authors clarify how data-centric methods can effectively achieve this? Specifically, what should data-centric approaches focus on? Which aspects of training data development (such as collection, labeling, preparation, reduction, or augmentation) should practitioners prioritize for DCG, and what should be the key objectives during the data engineering phase? 2. 
Why were insufficient data and lack of knowledge composition considered the major factors contributing to generalization error? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## More background and examples ### Background: We will include a new section in the Appendix to provide additional background knowledge about **statistical machine learning**. The structure is: 1) Key concepts, including data space, learning algorithms, function space, and the i.i.d. (independent and identically distributed) assumption. 2) Generalization analysis, with Rademacher complexity as an example. 3) Current applications of generalization bounds. 4) Extension to out-of-distribution settings. ### Examples: **Example for Compositional rule**: The composition rule is the rule describing how two components are combined. It can have different representations depending on the data generative process. In image creation, the composition rule can be represented as ``Draw the contour of a $<$shape$>$ and fill it with $<$color$>$.'' Similarly, for a robotic task, the composition rule can be expressed as "First, complete $<$subtask1$>$, then $<$subtask2$>$." The composition rule can also have a non-text representation, as long as it defines how two components are combined. **Example for inductive bias**: Inductive bias refers to a model's inherent preference for certain compositional rules before it is exposed to any training data for a given task. This bias can be introduced in two primary ways: Model Architecture Design – By carefully structuring the model, we can constrain its outputs to adhere to specific compositional rules. Pretraining & Objective Function – The inductive bias can also be shaped through pretraining strategies or the choice of objective function, either suppressing or reinforcing the model's tendency toward certain compositional behaviors. **Example for Assumption 5.3**: This assumption requires that the error is bounded. It can be easily satisfied by modifying the original error using a $\min(\text{error}, \text{bound})$ operation.
Alternatively, a bounded error measure, such as accuracy, which ranges between 0 and 1, can be used. **Example for Assumption 5.4**: This assumption ensures that, given all distributions, we can learn how the given components are combined. For example, if provided with images of various shapes and colors, we should be able to understand how shape and color interact to form specific images, such as an image of a red triangle. **More examples** demonstrating the implications of the theorems will be provided in the next version of the paper. ## W & Q > W1: empirical validation See **"Experiments" in the rebuttal to Reviewer XpR4**. > W2: No examples See "More background and examples". > W3: Practical implications See Q1. More detailed implications will be added in the next version of our paper. > Q1: Properties of data-centric methods and requirements for the data engineering phase? Our theory suggests that a data-centric approach is fundamental for achieving non-trade-off improvements. It highlights the following key properties of data-centric methods: 1) Data-centric methods should effectively leverage information from the data itself. Injecting human task-specific knowledge into method design—such as using specialized model architectures or loss functions—may hinder the method's ability to learn directly from the data. 2) Theorem 5.5 further asserts that compatibility between the learning algorithm and the data is crucial. This implies that the learning algorithm should achieve uniform performance across different compositions within the support distribution. For example, if the support distribution includes red triangles and blue rectangles, the model's performance on red triangles and blue rectangles should be similar. Regarding the requirements for the data engineering phase, our theory supports the co-design of both the solution (including network structure and objective function) and data collection.
Since the value of $I_{\mathcal{A},\beta}(\tilde{T}=T,P_S)$ in our theory depends on both the learning algorithm and the data, our theory cannot prescribe a universally optimal data development method independent of the specific approach. However, certain data quality requirements, such as the absence of label noise, are absolutely essential. > Q2: major factors contributing to generalization error In the DCG problem, the support distribution provides only a subset of knowledge combinations—for example, a red rectangle and a blue triangle. To generalize effectively, the model must first learn the concepts of colors and shapes and then recombine the learned concepts of "red" and "triangle" to form the concept of a "red triangle." Based on this property, we identify two major limiting factors: insufficient data and a lack of knowledge composition. Without sufficient data, the model tends to overfit to the small portion of training data, preventing it from learning the correct concepts. Meanwhile, the lack of knowledge composition indicates that the model struggles to recombine learned concepts, limiting its ability to generalize.
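The compositional split described in the answer to Q2 above (the support distribution covers, e.g., red rectangles and blue triangles, while generalization requires recombining learned concepts into unseen pairs such as a red triangle) can be made concrete with a tiny enumeration. This sketch only illustrates the split; the color/shape names follow the rebuttal's running example:

```python
from itertools import product

colors = ["red", "blue"]
shapes = ["rectangle", "triangle"]

all_pairs = set(product(colors, shapes))
# Support distribution: only some component combinations are observed in training
support = {("red", "rectangle"), ("blue", "triangle")}
# Target distribution: the held-out compositions the learner must generalize to
target = all_pairs - support

print(sorted(target))  # [('blue', 'rectangle'), ('red', 'triangle')]
```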
Summary: The authors analyze the problem of Distributional Compositional Generalization (DCG). Compositionality in this sense is the ability to model different features in the dataset and the statistical dependency between them. They try to provide statistical tools to assess whether it is possible to transfer one DCG problem to another and the capacity of the learning algorithm for DCG tasks. They propose a measure ($\mu$) which is based on the prediction function and invariant to the learning algorithm. They argue this measure can give insights on the DCG problem, with respect to different tasks and the respective learning algorithm. The work is highly abstract and hard to follow. There are many notations and definitions; it appears the work is intended for a niche group that specializes in this topic and formulation. Although the authors try to give some intuition and examples in words at the beginning, they do not return to them later. There is no example, no illustration, not even a toy problem, let alone a real learning problem where the theory is applied. Although some interesting insights may be learned here, I do not believe the venue of this conference is a good fit for this work. Claims And Evidence: see above Methods And Evaluation Criteria: see above Theoretical Claims: Several interesting theoretical claims. Experimental Designs Or Analyses: no experiments Supplementary Material: yes Relation To Broader Scientific Literature: The paper relates to the more abstract literature but does not review the literature on current compositional representations and methods. Essential References Not Discussed: see above Other Strengths And Weaknesses: see above Other Comments Or Suggestions: More illustrations and intuition, connecting the more abstract notions to an actual learning problem. Questions For Authors: See above. Ethical Review Concerns: no Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > W1: There is no example, no illustration, not even a toy problem Actually, we provide Example 3.2, Example 3.3, and an illustration in the Appendix (page 17). We have added more examples and background (for details, **see "More background and Examples" in the rebuttal to Reviewer nkeW**) and experiments on a simulation problem (see **"Experiments" in the rebuttal to Reviewer XpR4**). > W2: There are many notations and definitions; it appears the work is intended for a niche group that specializes in this topic and formulation. Our work presents the first statistical formalization of DCG problems. We acknowledge that the paper introduces many notations and definitions, but this complexity reflects the inherent difficulty of the problem. To improve clarity, we have taken significant steps to simplify the presentation: 1) We include schematic diagrams (Eq. 1 and Eq. 12) and provide examples (Example 3.2 and Example 3.3). 2) In the revised version, we have expanded the number of examples and added synthetic experiments to empirically validate our theoretical findings (**see W1**). If you have any **concrete suggestions**, we would gladly incorporate them and revise the paper **promptly**. This paper is **never** intended for a niche group — if you believe there are aspects that may inadvertently limit its accessibility, we would appreciate further details on which groups might find the presentation challenging. > W3: Do not review literature on current compositional representations and methods. Thank you for your suggestions. In the Related Work section, we already cover disentangled representation learning, which we view as a subset of compositional representation learning. To better address your point, we have expanded this section to include a dedicated discussion of current compositional representation learning methods, incorporating your suggested direction.
Additionally, we will integrate the following literature into this new discussion: [1] Compositional Generalization in Unsupervised Compositional Representation Learning: A Study on Disentanglement and Emergent Language [2] CORL: Compositional Representation Learning for Few-Shot Classification [3] Representation Learning of Compositional Data [4] Measuring Compositionality in Representation Learning [5] Rule-Guided Compositional Representation Learning on Knowledge Graphs [6] THE ROLE OF DISENTANGLEMENT IN GENERALISATION [7] Where's the Learning in Representation Learning for Compositional Semantics and the Case of Thematic Fit > W4: I do not believe the venue of this conference is a good fit for this work. We respectfully disagree with the view that ICML is not suitable for our paper, for the following reasons: 1. This paper focuses on analyzing machine learning algorithms in DCG tasks, making it highly relevant to the topics listed in the ICML call for papers, specifically under "Theory of Machine Learning" (including statistical learning theory, bandits, game theory, decision theory, etc.). 2. ICML accepts many papers with dense theoretical analysis, and the other reviewers have provided relatively lengthy review comments, suggesting a certain level of interest in this paper within the ICML community. 3. ICML is an open and diverse community, which is one of the reasons we deeply appreciate it. We hope to uphold this openness. We acknowledge that theoretical researchers are somewhat in the minority within the community, but ICML has always provided opportunities for underrepresented perspectives. --- Rebuttal Comment 1.1: Comment: In light of the rebuttal answers to my review and to the other reviewers, provided the changes to the extended literature are made and the additional examples and intuitions are added, I raise my rating for this paper.
Summary: This paper proposes a theoretical statistical framework for analyzing Distributional Compositional Generalization (DCG). An invariant measure is proposed to evaluate the generalizability of methods across DCG tasks, and a generalization bound is derived that separates the effects of insufficient data from knowledge composition capabilities. Their findings highlight the role of mutual information and algorithm-rule compatibility in DCG performance. Claims And Evidence: The claims presented in the paper are largely theoretical and mathematical. However, they lack empirical validation or experimental evidence to demonstrate their practical applicability and effectiveness. Specifically: 1. Although the invariant measure is mathematically derived and proven, the claims regarding its practical utility or interpretability lack empirical demonstration. 2. The theoretical bound separating data insufficiency from compositional errors is presented; however, the absence of empirical or simulation-based evidence makes it challenging to assess its practical tightness or usefulness. Thus, while theoretically sound, the primary problematic claim is the applicability and practical relevance of these theoretical results without supporting empirical validation or concrete examples. Methods And Evaluation Criteria: The proposed methods—namely, the invariant measure and the derived generalization bound—are conceptually appropriate and logically consistent with the theoretical objectives of understanding DCG. However, the paper lacks specific evaluation criteria or benchmark datasets to practically validate these theoretical contributions. Introducing empirical evaluation or clearly defined benchmarks would significantly strengthen the practical relevance and interpretability of the theoretical findings.
Theoretical Claims: Check claims and evidence Experimental Designs Or Analyses: NA Supplementary Material: Proofs Relation To Broader Scientific Literature: The paper extends statistical learning theory (e.g., PAC learning, NFL theorem) specifically to DCG. Unlike prior work focusing on specific DCG scenarios, it offers a general theoretical foundation. Essential References Not Discussed: NA Other Strengths And Weaknesses: Check claims and evidence Other Comments Or Suggestions: NA Questions For Authors: 1. Could you provide examples or scenarios (even simulated) that empirically validate the theoretical invariant measure and generalization bound? 2. How do you envision your theoretical findings practically influencing algorithm design or improvement in real-world DCG tasks? 3. Could you explicitly discuss and justify the assumptions used in your generalization bound? Under what real-world conditions might these assumptions fail? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## 1. Experiments ### 1.1. Experiment Design: **1. Components and compositional rule**: We construct two word sets $A, B$ satisfying $|A|=|B|=1000$, with corresponding subsets $a_1, a_2 \subset A$ and $b_1, b_2 \subset B$. $a_1, a_2$ is a partition of $A$, and likewise $b_1, b_2$ of $B$, with $|a_1|=|a_2|=|b_1|=|b_2|=500$. The composition rule can be any function of the following form: $(e_1,e_2)\rightarrow e_1 e_2 e_1 e_2 e_1 e_1$. We construct 64 composition functions, referred to as $T_1,T_2,\cdots,T_{64}$. **2. Distribution split**: The support distribution takes elements in the set $\lbrace(e_1,e_2)\mid(e_1,e_2) \in a_1 \times b_1 \cup a_2\times b_1 \cup a_1\times b_2 \rbrace$. The target distribution takes elements in the set $\lbrace(e_1,e_2)\mid(e_1,e_2) \in a_2\times b_2 \rbrace$. It is easy to verify that this design satisfies the requirements listed in Section 3. **3. Sequence design**: The input sequence is "$e_1,e_2,r_1,r_2,r_3,$#", where $r_1, r_2, r_3$ are random words that simulate randomness. The expected completed sequence is "$e_1,e_2,r_1,r_2,r_3,$#$,e_1,e_2,e_1,e_2,e_1,e_1$" if the composition rule is $(e_1,e_2)\rightarrow e_1,e_2,e_1,e_2,e_1,e_1$. **4. Learning algorithm design**: In our paper, we define the learning algorithm as the mapping between data and the learned function, encompassing a broader concept than just the optimizer. To simulate learning algorithms with varying inductive biases and adaptivity, we adopt the following approach: 1. We employ the GPT-2 model with two configurations: - **Setting 1:** 4 layers, 4 attention heads, and an embedding size of 128. - **Setting 2:** 6 layers, 8 attention heads, and an embedding size of 256. 2. We pretrain the GPT-2 model using different pretraining data schedules. The pretraining data is generated from a subset of composition rules identical to those in the downstream task, but with entirely different words.
This setup allows us to create learning algorithms with different inductive biases and adaptivity while preventing data leakage.

### 1.2. Experiments on trade-off and non-trade-off improvement

One of the key points in this paper is that non-trade-off improvement has to rely on the adaptivity of the learning algorithm (for details, see "Beyond trade-off", page 5). To verify this conclusion, we calculate $I_{\mathcal{A},\beta}(\tilde{T}=T,P_S)$, the measure of adaptivity used in our paper, and GACC, the average performance across all tasks with composition rules in $T_1,T_2,\cdots,T_{64}$. The results are:

| $I_{\mathcal{A},\beta}(\tilde{T}=T,P_S)$ | 0.073 | 0.115 | 0.125 | 0.281 | 0.362 | 0.462 | 0.481 | 0.505 | 0.527 | 0.564 |
| --- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- |
| GACC | 0.605 | 0.591 | 0.565 | 0.633 | 0.696 | 0.750 | 0.772 | 0.789 | 0.752 | 0.776 |

### 1.3. Experiments on Generalization Bounds

We conduct experiments with different rule complexities using the best pretraining setting from the previous section. Rule complexity refers to the length of the rule on the output side. For example, the rule complexity of $(e_1, e_2) \rightarrow e_1, e_2, e_1, e_2, e_1, e_1$ is 6, while the rule complexity of $(e_1, e_2) \rightarrow e_1, e_2, e_1, e_2, e_1, e_1, e_1, e_2$ is 8.

| Rule Complexity | 6 | 8 | 10 | 12 |
| --- | --- | --- | --- | --- |
| CG Error | 0.223 | 0.262 | 0.301 | 0.342 |
| Ben-David et al. | 0.622 | 0.680 | 0.701 | 0.690 |
| Ours | 0.271 | 0.295 | 0.351 | 0.372 |

The results indicate that our generalization bound is tighter than the bound of Ben-David et al.

## 2. Question

> Q1: empirical validation See 1. Experiments. > Q2: envisioned practical influence? See **Q1 in the response to Reviewer nkeW**. > Q3: justify the assumptions Assumption 5.3 requires that the error is bounded.
Without this assumption, if the solution performs poorly on a small subset of data points but performs well on the rest, the average error could be disproportionately large due to extreme errors in that small subset. This assumption can easily be satisfied by modifying the original error using a $\min(\text{error}, \text{bound})$ operation. Alternatively, a bounded error measure, such as 1-accuracy, which ranges between 0 and 1, can be used. Assumption 5.4 requires that the DCG problem is solvable. Our bound does not apply to DCG problems that are entirely unsolvable. This assumption ensures that, given all distributions, we can learn how the given components are combined. For example, if provided with images of various shapes and colors, we should be able to understand how shape and color interact to form specific images, such as the image of a red triangle. This assumption guarantees that there **exists** a way to recover these compositional rules from the data.
> w1: concrete examples.

See **''More background and Examples'' of the rebuttal to Reviewer nkeW**.
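For concreteness, the sequence construction described in Section 1.1 of this rebuttal can be sketched as follows. This is an illustrative sketch only: words are represented as plain integers, and the vocabulary encoding and helper names are stand-ins, not the authors' actual implementation.

```python
import random

# Illustrative stand-in for the construction in Section 1.1: words are
# integers, and the set sizes mirror the rebuttal (|A| = |B| = 1000,
# each split into halves of 500).
A = list(range(0, 1000))
B = list(range(1000, 2000))
a1, a2 = A[:500], A[500:]
b1, b2 = B[:500], B[500:]

def compose(e1, e2):
    """One composition rule of the stated form: (e1, e2) -> e1 e2 e1 e2 e1 e1."""
    return [e1, e2, e1, e2, e1, e1]

def make_sequence(e1, e2, sep="#"):
    """Input 'e1 e2 r1 r2 r3 #' followed by the expected completion."""
    r = random.sample(range(2000, 3000), 3)  # random filler words
    return [e1, e2, *r, sep, *compose(e1, e2)]

# Support distribution draws pairs from a1×b1 ∪ a2×b1 ∪ a1×b2;
# the target (held-out) distribution draws pairs from a2×b2.
target_pair = (random.choice(a2), random.choice(b2))
example = make_sequence(*target_pair)
```

Sixty-four such rules would then correspond to sixty-four variants of `compose`, each a different output-side pattern over $(e_1, e_2)$.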
The Empirical Mean is Minimax Optimal for Local Glivenko-Cantelli
Accept (poster)
Summary: Background: In this work the authors investigate the question of estimating densities on $\lbrace 0,1\rbrace^\mathbb{N}$, where each index is assumed independent, so a sample is a sequence of independent $\mathrm{Bern}(p_i)$ random variables and the density has the form $\mu = \prod_{i \in \mathbb{N}} \mu_i$; estimation is from iid samples of $\mu$. The authors are interested in the estimation loss $\Delta_n = \mathbb{E}\Vert p - \hat{p}\Vert_\infty$. It's not hard to see that without any assumptions on $(p_i)^\infty_{i=1}$, one cannot control $\Delta_n$ for any estimator, and especially not for the empirical mean estimator (EME). Previous work has looked into this and established a class of sequences $\mathsf{LGC}$ for which $\Delta_n$ can be bounded for the EME. In particular, $\mathsf{LGC}$ consists of sequences in $[0,1]^\mathbb{N}$ where all entries are at most 1/2, $p_i \downarrow 0$, and $(p_i)^\infty_{i=1}$ must go to zero at a certain rate ((2) in the paper). Contribution: In this work the authors look at relaxed versions of the above setting by considering estimators other than the EME and larger classes of densities. The larger class of densities they consider, which we will call $\mathcal{P}$, is similar to $\mathsf{LGC}$. In this $\mathcal{P}$ the authors relax $p \in [0,1/2]^\mathbb{N}$ to $p \in [0,1]^\mathbb{N}$; instead of the sequences going to zero, they must concentrate near $\lbrace 0,1 \rbrace$ as $i\to\infty$, i.e., $1/2 - |p_i-1/2| \to 0$; and, for all $p \in \mathcal{P}$, flipping any number of entries (finitely or infinitely many) about 1/2 (e.g., $0.1 \to 0.9$) yields a sequence that also lies in $\mathcal{P}$. In Thm 2.1 the authors show that any learnable class $\mathcal{P}$ of this form must have $1/2 - |p_i-1/2|$ decaying at the same rate as in $\mathsf{LGC}$, thereby tightly delineating the boundary of this expanded collection of densities and showing that this expanded $\mathsf{LGC}$ in some sense contains all learnable densities.
In Thm 2.2 they show that EME is nearly minimax optimal for learning $\mathsf{LGC}$. I would call these the _core_ results of the work, with 2.3 and 2.4 exploring the boundaries of the setting a bit more. ## Update after rebuttal I have raised my score by one. While I appreciate the clarifications provided in the rebuttal, I still find the contribution to be relatively modest for a purely theoretical result. It is uncommon for a theory paper to present a complete proof within the main text, which suggests a limited depth. Although the significance remains somewhat unclear in my view, the rebuttal has helped to improve my assessment in that regard. I would strongly recommend that the authors include more context regarding the practical implications (or potential practical implications) of their work. Claims And Evidence: N/A, the work is purely theoretical. Methods And Evaluation Criteria: N/A, the work is purely theoretical. Theoretical Claims: I checked the proofs Theorems 2.1 and 2.2 somewhat carefully and did not find any significant issues although I admittedly did not do a 100% assiduous check of these proofs, e.g., check the references or double check every single step of algebra. The proof techniques are very standard, in a setting that isn't terribly delicate, and the general techniques and results align well with what one would expect in this setting. That being said, I think there may be a few (non-critical) issues. * I think there may be a small error on l. 188 right. Should "$\alpha_1(x) = 1$" instead be "$\alpha_1(x) = x$"? For clarity I would just write $\alpha_1(x_1) = x_1$. My understanding is that we want to write $P(\cup E_i)$ as the $P(E_1) + P(E_1^C\cap E_2) + P(E_1^C \cap E_2^C \cap E_3) + \cdots$, where we replace the intersects with products due to independence, in which case we want $\alpha_1(P(E_1)) = P(E_1)$. * Line 197 left: I agree that this assumption should be nonproblematic, but I think this should be dealt with rigorously. 
* $\mu^{(k,n)}$ is defined three sentences later. Please introduce this before or immediately after using it. Experimental Designs Or Analyses: N/A Supplementary Material: No. The supplement only contained code. Relation To Broader Scientific Literature: This work is an extension of a line of research on estimating densities on infinite bit strings, where all the bits are independent. Surprisingly, this is a setting that seems to have only been explored very recently. Beyond this it is difficult for me to contextualize this more. This work seems to be a natural extension of discrete density estimation to an alphabet that is uncountable. The rates achieved match the well-known $1/\sqrt{n}$ rate for estimation over a finite or countable alphabet. Naturally we need some regularity conditions (functions T and S) to make this problem tractable. This work relaxes the regularity condition and shows that a previous work was minimax optimal, which is always nice to know. Essential References Not Discussed: This seemed fine to me. Other Strengths And Weaknesses: There are a few issues with this work: - While I personally appreciate fundamental questions about learnability, I don’t think this topic is a good fit for ICML. This paper extends work from a previous COLT paper, which seems like a more appropriate venue. The broader significance for the ML community is quite limited—I struggle to see any practical applications or significant implications beyond a purely academic exploration of density estimation. Theory papers at ICML should either offer insights relevant to practitioners or shed light on important settings, and this work does neither. - The proofs are fairly standard. If the paper introduced a genuinely novel technique or a powerful new idea, it might justify acceptance despite its lack of practical impact. However, everything here looks quite routine to me. Other Comments Or Suggestions: - l.
44 left: Fix alignment with $\mathbb{E} X^{(1)}$. - l. 121 left: "morally" sounds very strange to me here. - l. 144 right vs. 158 right: I think I'm just missing an implication of your results, but I see "implying that the decay condition of Theorems 2.1 is necessary." and "Two natural directions for future study are extensions of Theorems 2.1 and 2.3. For the former, it is likely that the conditions on P are too stringent and can be significantly relaxed; in particular, requiring that P be decaying is quite probably unnecessary.", which seem to contradict each other. Maybe this should be elaborated on a bit. I don’t see any major issues with the work itself, but the results seem like such a poor fit for ICML that I was considering giving it a 1 (Reject). As someone who regularly submits highly theoretical papers to major ML conferences, I would expect this work to receive virtually no attention in this venue. It would likely have much more impact at COLT or another more topically appropriate venue, such as ITA. Questions For Authors: My main question is: why should the broader ML community be interested in these results? Does this type of estimation problem arise in any important setting? From the name, I assume there’s some connection to CDF estimation, but the link isn’t clear to me. I’m actually surprised by how little motivation is provided, especially for an ICML paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
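To make the estimation setting discussed in this review concrete, here is a small, illustrative simulation (not from the paper; the truncation length, decay rate, sample sizes, and trial count are stand-ins) of the EME's sup-norm error for a polynomially decaying $p$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Truncated stand-in for the infinite setting: p_i decays like i^{-2},
# capped at 1/2 as in the LGC class. d and the decay rate are illustrative.
d = 1000
p = np.minimum(0.5, 1.0 / np.arange(1, d + 1) ** 2)

def eme_error(n, trials=100):
    """Monte Carlo estimate of E || p - p_hat ||_inf for the empirical mean."""
    errs = []
    for _ in range(trials):
        samples = rng.binomial(1, p, size=(n, d))  # n iid draws of the bit string
        p_hat = samples.mean(axis=0)               # empirical mean estimator (EME)
        errs.append(np.max(np.abs(p_hat - p)))
    return float(np.mean(errs))
```

Under such a decaying sequence the estimated sup-norm error shrinks as $n$ grows, matching the qualitative picture of learnability within $\mathsf{LGC}$.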
Rebuttal 1: Rebuttal: Thank you for your thorough evaluation. We address your concerns below: 1. Relevance and Venue Suitability We recognize that our work is theoretical. However, uniform convergence in the Local Glivenko-Cantelli sense is fundamental to learning theory, informing both risk bounds and distribution‐estimation methods. While we considered submitting to a more theory‐focused venue such as COLT, we also believe there is an audience at ICML that values rigorous foundational work in ML. We can point to dozens of papers published in ICML at a comparable level of theoretical abstraction. 2. Minor Theoretical / Textual Points - Line 188 (right): We agree that $\alpha_1(x)$ should be $\alpha_1(x)=x$. - Line 197 (left): We will justify the assumption more rigorously. - Definition Timing: We will introduce $\mu^{(k,n)}$ earlier for clarity. - Line 44 (left): We will correct the alignment with $\mathbb{E}X^{(1)}$. - Line 121 (left): We will replace “morally” with a more precise term. - Lines 144 (right) vs. 158 (right): We acknowledge the apparent contradiction concerning the necessity versus the stringency of the decay condition. We do believe the condition can be replaced with weaker assumptions in certain scenarios, and we will clarify this nuance. 3. Broader Motivation and Applications Although the problem may appear standalone, it initially arose from analyzing practical algorithms, as detailed in the Local Glivenko-Cantelli paper by Cohen & Kontorovich (2023), which stems from the multiplicative weights method. Additional ties to CDF estimation are found in the work of Blanchard & Voráček (2024), where a modified Dvoretzky-Kiefer-Wolfowitz (DKW) approach is employed. Given the inherent space limitations of a conference paper, we chose to prioritize presenting novel results over a recapitulation of well-established motivation in recent literature. 
That said, we take this critique to heart and will expand upon the motivation -- including genuinely "applied" ramifications -- in the revision. In particular, we will delve into the original motivation behind the LGC question: the Multiplicative Weights algorithm for online learning or solving a repeated zero-sum game, with a distribution-dependent, dimension-free dependence on the number of "experts". We appreciate your feedback and hope these clarifications address your concerns while underscoring both the importance and rigor of our contributions. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I am inclined to maintain my score. I agree that this is a reasonably established line of work, but I don't feel comfortable asserting that this is a significant result in the context of ICML. Looking at the "Local Glivenko-Cantelli paper by Cohen & Kontorovich (2023)", the significance is still not clear to me (the paper contains no mention of the multiplicative weights method). Again I am still somewhat on the fence about this, but it still seems as though the authors are not capable of concretely and clearly describing why the findings in the paper are significant to the general ML community: e.g., clearly describing the learning problem and method for which this gives some interesting or useful implications. Looking at the other reviews it seems like the other reviewers don't really know why these results are significant either, and in light of that I lean towards reject. --- Reply to Comment 1.1.1: Comment: Clearly, for the ML community, the importance of high-dimensional mean estimation is undisputed. Any method that replaces the worst-case dependence on the dimension by a much more refined dependence on the distribution can considerably sharpen the bounds.
Consider, for example, the original problem statement motivating the line of results on LGC, originating in this cstheory post: https://cstheory.stackexchange.com/questions/42009/is-uniform-convergence-faster-for-low-entropy-distributions In systems with many experts, bounds with worst-case dependence on the dimension (i.e., the number of experts) are useless when we sample from experts whose expertise adapts dynamically (the dynamics of the multiplicative weight updates mentioned in the post is one such example). Our results show that as the algorithm progresses and the distribution becomes less entropic, fewer samples are needed to ensure uniform estimation of the experts' expertise, and estimating with the empirical mean is optimal. We fully intend to pursue this and other applications in an active current line of research. The thrust of the present results is to examine the optimality of the empirical mean estimator for this problem. Insofar as the original motivation is compelling, we argue that understanding the optimal estimator for it is equally important.
Summary: In the local Glivenko-Cantelli setting, one seeks to learn an unknown distribution $\mu$ over $\{0,1\}^\mathbb{N}$ from samples. In the case where $\mu$ is a product measure, as considered in this paper, it is fully described by a vector $p \in [0,1]^{\mathbb{N}}$. One natural estimate for $p$ is the empirical mean. In previous work, the performance of this estimator (in terms of the expected $\ell_\infty$ error) was tightly characterized. In particular, the class LGC (local Glivenko-Cantelli) of those $p$ for which this error vanishes as the sample size $n$ grows is tightly characterized. This work asks how this landscape changes if one allows alternative estimators (the answer: not too much). In particular, if a class $\mathcal{P}$ satisfying some rather mild regularity conditions is learnable by any estimator, they prove that $\mathcal{P}$ is essentially a subset of LGC (up to a minor, natural tweak to this class). They also provide counterexamples showing that these regularity conditions cannot be fully removed (although there are some interesting questions about relaxing them a bit). Moreover, they prove a minimax lower bound showing that the risk bound achieved by the empirical mean estimator cannot be improved too much (for any estimator). ## Update after Rebuttal I maintain my positive score. Claims And Evidence: The proof of Thm 2.1 uses some careful applications of Neyman-Pearson and a lemma from a previous paper in this area. I did not dig into the previous proof of that lemma, but the argument otherwise looks sound. The proof of Thm 2.2 begins with a natural construction and application of Fano's inequality. However, the exact tuning of the parameters requires a lot of care, and I think this could be better explained. I followed the individual steps of the proof, and, assuming that cited results are true, it appears sound.
However, I would feel more comfortable vouching for correctness if there was more high-level discussion about parameter tuning. I think there are some easily fixed issues with the proof of Thm 2.3, which I mention below. But I am sure the result is true. Proposition 2.4 has a direct and complete proof. Methods And Evaluation Criteria: n/a Theoretical Claims: Yes, as discussed above. In the proof of Thm 2.3, the estimator defined at Step 3 should involve a sum over $\hat{p}_n(i)$ rather than $\hat{p}_n(j)$, I believe. Moreover the discussion at Step 4 is of convergence for fixed $j$, when the question is really about uniform convergence over $j$. Without the need for a uniform guarantee there is no need to use the alternate estimator. Of course, it is clear that the authors understand this, and it is easy to fix. Experimental Designs Or Analyses: There are a few supporting plots in the Appendix. At a skim, they look good (but even if there were issues I do not think they play a critical role). Supplementary Material: n/a Relation To Broader Scientific Literature: The study of the local Glivenko-Cantelli class was initiated pretty recently, and this paper cites the relevant work that I know of, though I am not an expert on this specific problem. The existing work focused on the empirical mean estimator, whereas this work considers the limits of learnability using any estimator. Essential References Not Discussed: n/a Other Strengths And Weaknesses: The paper is generally well written and nice to read. As someone familiar with the methods in this work and some of the earlier results on LGC, I have a bit of a hard time assessing the significance of this paper's results. Are there any interesting applications of LGC to other statistical / learning theoretic problems? Also, I am not viewing any of the proof techniques as primary contributions in their own right.
If there is any major technical difficulty to one of the proofs that would rule out simpler approaches, this should be highlighted. The statement of Thm 2.2 is a bit funny. It seems that first string of inequalities on $n,s,t$ can be removed, since they appear again right afterwards. I think some further discussion is warranted on the gap between this LB and the UB. I also think the proof could use some more high-level discussion, as mentioned above. Other Comments Or Suggestions: On page 4, bottom right, there are some missing indices from the $p^{(Y)}$ vector on the second and third lines. On page 7, in the first equation block of Step 1, $B_j$ should be $E_j$. Questions For Authors: Suppose I am interested in mean estimation, rather than distribution estimation. What does known estimation landscape look like there beyond product distributions, and would any of your results translate? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your constructive review. We appreciate your insights and address your key concerns below: 1. Parameter Tuning and Proof Clarity (Theorems 2.2 and 2.3): We recognize that parameter tuning in Theorem 2.2 is indeed the main challenge. We intend to offer clearer intuition on how these parameters are selected and why they yield the stated minimax bound. In Theorem 2.3, we will correct the minor issues concerning the estimator’s definition at Step 3 (e.g., using $\hat{p}^n(i)$ instead of $\hat{p}^n(j)$) and refine the discussion on uniform convergence to avoid confusion regarding a fixed $j$. 2. Significance and Applications: We understand your request for a broader context of potential applications of LGC. While our primary focus is the Local Glivenko-Cantelli framework, we will highlight possible connections to mean estimation in correlated or more complex settings. Where relevant, we will also discuss why simpler approaches do not suffice, underscoring the nontriviality of our techniques. 3. Minor Corrections and Notational Consistency: We will fix the missing indices in the expression for $p^{(Y)}$ (page 4, bottom right) and replace $B_j$ with $E_j$ on page 7 (Step 1), ensuring accuracy and consistency throughout. 4. Beyond Product Distributions: As for extending these findings to mean estimation in non‐product settings, note that $\ell_\infty$ mean estimation is not widely studied beyond the references cited in our introduction, and we are not aware of additional relevant work. We appreciate your thoughtful review and hope these clarifications address your concerns.
Summary: The paper discusses the "Local Glivenko Cantelli" problem focusing on families of product distributions. There are three main results: The first main theorem (Theorem 2.1) argues that LGC (the family of product measures that is learnable by the empirical mean estimator EME) is the largest family learnable by any fixed estimator, by showing any family of product measures that decays, is strongly symmetric about 1/2, and is learnable belongs to LGC. The second main theorem (Theorem 2.2) argues that EME is nearly minimax-optimal by establishing a minimax lower bound for any estimator based on n i.i.d. samples from the latent distribution and comparing it with established bounds from results in the literature. The third main theorem (Theorem 2.3) shows that LGC can be expanded if the learner can exploit structural knowledge of the problem. In these scenarios, some assumptions about decay and symmetry may be relaxed. Claims And Evidence: The paper is mostly theoretical, and the claims are clear. There are some questions about the proofs that are stated in a later section of the review. The paper explains the main results clearly and provides proof for each of the main theorems. Methods And Evaluation Criteria: Since the paper is mainly theoretical, there are limited discussions about empirical methods or evaluation criteria. Some simulations are provided for the EME and the "simple average estimator", showing that the latter performs better, while both illustrate tightness of minimax bounds presented in Theorem 2.2. Theoretical Claims: I read the proofs for all three theorems but did not check all proofs rigorously. Some questions are listed below in the "questions for authors" section. Experimental Designs Or Analyses: There are not many experiments provided in the paper. It would be helpful to give a description of the simple average estimator, which was not included in the paper although it was used extensively to compare against the EME. 
Supplementary Material: The supplementary material contains simulations. I read over them but did not check rigorously. Relation To Broader Scientific Literature: Based on the descriptions in the paper, there were multiple related problem settings in the literature that have been explored with fruitful results. This paper identifies a new direction but borrows many tools and techniques from the literature. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Overall, I found the paper to be a nice read. The authors did a great job describing the problem setting and presenting the main results. One small concern is that the main results are quite short -- descriptions, main results, and discussions were mostly within the first three pages, while the remaining main text was dedicated to proofs of the main theorems. Considering there have been multiple papers in the literature on LGC, I think it would be helpful if the paper contained a section highlighting novelties in the proof techniques that this paper introduces. It would also be great if the paper could contain some discussion of applications or implications for real-world scenarios. Other Comments Or Suggestions: One minor typo: 1. Line 145 on the right side: "Theorems 2.1". Questions For Authors: I have some questions related to proofs: **Proof of Theorem 2.1** 1. I did not fully understand why the assumption $\dot{p}_j^*\in[0,1/4]^\mathbb{N}$ incurs no loss of generality, especially when it is used to lower bound the minimax risk later. 2. It might be worthwhile to elaborate on what optimality criteria the Neyman-Pearson lemma gives to an estimator. For instance, how do we guarantee the optimal estimator $\hat{y}$ that minimizes the posterior probability of error also minimizes the expected $L_\infty$ norm? It would be helpful to clarify these since the argument of the contradiction builds heavily on analysis of the posterior probability. **Proof of Theorem 2.2** 3.
In line 260 on the left side, why is $\|p^{(k)}-p^{(\ell)}\|_\infty = |q'-q|$? Are we missing an assumption that $q'\geq 2q$? A similar statement is also used later around lines 283-284 on the left side. **Proof of Theorem 2.3** 4. The definition of the test function is a bit ambiguous. Is it safe to presume $B_j$ in lines 355-356 on the right side should be $E_j$? 5. It would be helpful to provide statements referenced from the literature. For instance, in Step 2, Lemma 3 of Cohen & Kontorovich (2023) is repeatedly referenced. It would be helpful to state the lemma. Ethical Review Concerns: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your detailed and insightful evaluation and for your positive comments regarding the clarity of our main results and the rigor of our proofs. Below, we address your specific concerns: 1. Regarding the assumption $\dot{p}_j^*\in[0,1/4]^\mathbb{N}$, note that we can disregard any values where $\dot{p}_j > 1/4$ without loss of generality. In fact, excluding these values only serves to decrease the supremum in our minimax analysis, which we have shown is lower bounded by 1/4. We will clarify this point further. 2. We acknowledge the importance of the optimality criteria provided by the Neyman-Pearson lemma. In our current draft, we refer to the lemma without additional discussion; we will expand on its role to clarify how minimizing the posterior error probability aligns with minimizing the expected $L_{\infty}$ norm. 3. Concerning the inequality in Theorem 2.2, there is no need to assume $q'\geq 2q$. The equality $\|p^{(k)}-p^{(\ell)}\|_\infty = |q'-q|$ follows directly from our construction, as it represents the maximum coordinate-wise difference between $p^{(k)}$ and $p^{(\ell)}$. 4. We agree that the notation is inconsistent; $B_j$ should indeed be $E_j$, and this will be corrected. 5. We will also include an explicit statement of the referenced lemma from Cohen & Kontorovich (2023) to ensure clarity. Additionally, we appreciate your suggestion to provide a description of the simple average estimator used in our simulations. Finally, we note your inquiry regarding the broader significance of our work; our study builds upon a well-established framework with direct implications in learning theory, and the technical challenges we address are of independent interest. We thank you again for your constructive feedback.
Summary: This paper investigates the mean estimation problem in the binomial empirical process. First, under mild technical conditions, it establishes that the LGC class, as defined in Cohen & Kontorovich (2023), is the largest class that is learnable by any estimator. Furthermore, it demonstrates that the empirical mean estimator (EME) achieves the minimax optimal rate. Finally, the paper provides examples of learnable classes that do not require a decaying assumption, broadening the scope of learnability beyond previously studied settings. Overall, this work offers a more comprehensive understanding of mean estimation in the binomial empirical process. Claims And Evidence: Yes. The claim is clear and well-supported by theorems and proofs. Methods And Evaluation Criteria: Does not apply. Theoretical Claims: This is a theory paper. While I did not verify every detail of the proofs, they appear sound at a glance. The contribution is solid. Experimental Designs Or Analyses: Does not apply. Supplementary Material: No supplementary materials. Relation To Broader Scientific Literature: The problem is a bit standalone and a bit disconnected from broader machine learning literature. Essential References Not Discussed: No Other Strengths And Weaknesses: The paper is well-written, and the contribution is solid. The main limitation is that the problem is somewhat standalone and lacks strong connections to broader machine-learning topics. The paper could be strengthened by exploring its relationship with general empirical process theory and highlighting its relevance. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thorough and encouraging evaluation. We appreciate your recognition of the soundness and clarity of our theoretical contributions and proofs. Regarding the observation that the problem might appear somewhat standalone, we would like to emphasize that the original LGC paper by Cohen & Kontorovich (2023) provides a detailed discussion on its broader connections to machine learning, notably illustrating how the problem emerged from an analysis of the multiplicative weights algorithm. Uniform convergence in the Glivenko-Cantelli sense remains a cornerstone for deriving generalization bounds, which underscores the relevance of our investigation to the wider ML community. > The main limitation is that the problem is somewhat standalone and lacks strong connections to broader machine-learning topics. The paper could be strengthened by exploring its relationship with general empirical process theory and highlighting its relevance. In the revision, we will elaborate upon the motivation and better situate the paper in the context of recent developments.
Summary: This paper focuses on the Local Glivenko-Cantelli setting, which studies the uniform convergence rates of the empirical mean estimator (EME). Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. Experimental Designs Or Analyses: No. Supplementary Material: No. Relation To Broader Scientific Literature: It would be worthwhile to focus on the key questions without introducing much technicality in the introduction section, to make the topic more appealing to the broader scientific literature. Essential References Not Discussed: No. Other Strengths And Weaknesses: While I appreciate the technical rigor, I believe the main paper should focus on the intuition and logical development instead of technical proofs. Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We fully agree that presenting the key questions in an intuitive way is valuable. In our work, we carefully balanced the need to convey the underlying ideas with the necessity of rigorous technical proofs, given the inherent complexity of the Local Glivenko-Cantelli setting. We did strive to emphasize the logical development and core insights, but simplifying the exposition further proved challenging without compromising the correctness of our results. > While I appreciate the technical rigor, I believe the main paper should focus on the intuition and logical development instead of technical proofs. In the revision, we will be happy to expand upon the intuition and logical development.
Emergence in non-neural models: grokking modular arithmetic via average gradient outer product
Accept (oral)
Summary: This paper investigates grokking in the common scenario of modular arithmetic. In contrast to previous approaches, the authors show that grokking occurs when using the RFM learning algorithm. This enables them to isolate feature learning as the source of grokking in these setups. By defining two progress measures, they demonstrate that the feature learning dynamics do not exhibit two distinct phases. Finally, they compare their results to the traditional NN approach and show that their findings are consistent with previous attempts. Claims And Evidence: Their claims are sufficiently supported by comprehensive numerical experiments. Methods And Evaluation Criteria: The methods and evaluations are fine. Theoretical Claims: The paper mainly contains observations that are supported by strong numerical evidence. Experimental Designs Or Analyses: All of the experiments look valid. Supplementary Material: I also briefly looked into the appendix, which contains additional numerical experiments and derivations. Relation To Broader Scientific Literature: This paper can help to figure out the mechanism behind the well-known phenomenon of grokking in modular addition, which could advance our understanding of the interplay between training and testing dynamics. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths:** - This paper shows that grokking in modular arithmetic can also occur using RFM, which is a novel contribution. - This approach allows the authors to isolate feature learning as the central mechanism of grokking in this setup. - They demonstrate that generalization requires a circulant structure in the feature matrix, which appears important for understanding modular arithmetic tasks. - The authors define two progress measures that, unlike test accuracy, exhibit a linear change even in the early stages of training. This is significant as it suggests that there is no real phase transition during feature learning.
I also found the discussion of the required a priori knowledge interesting.
- They compare their findings to the modular arithmetic NN approach (Gromov, etc.), showing that it relies on the same principles and that the new progress measures could also be applied there.
- The paper is very well-written.

**Weaknesses:**
- I felt that some details are still unclear and may need further investigation:
  - What is the fundamental mechanism behind the delayed generalization observed here? In other words, why does generalization accuracy behave so differently from the continuous progress measure? It appears that the jump in accuracy occurs when the continuous progress measure saturates. Is this consistent? Have you identified a specific threshold in the progress measures?
  - Generalization does not occur below a certain threshold of the training data fraction. What is the interplay between this threshold and your results?
- Finally, since the paper focuses on the specific task of modular arithmetic, the scope of the results is naturally limited.

Overall, the paper is well-written and contains important insights, so I believe it should be accepted.

Other Comments Or Suggestions: I believe that further investigation of the behaviour of the progress measures for different ratios could be beneficial. For example, it could be interesting to see the AGOP alignment (of the current iteration versus the final iteration) and the circulant deviation at several different ratios (below and above the critical threshold). Fig. 10 is also interesting in this context: the AGOP still changes quite a lot from fraction 40% to 45%, while the test accuracy is already 1 in both of them. It could be interesting to see this graph for more fractions (larger than 45) and also for the circulant deviation measure.

Typos:
1. Line 70: "and and test accuracy remain at the constant"
2. Around Eq. (6), $\alpha$ probably should be $s$ (by the way, a brief justification for the choice $s=1/2$ would be nice).
Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewers for their detailed feedback. We address their questions and comments below. >I felt that some details are still unclear and may need further investigation: What is the fundamental mechanism behind the delayed generalization observed here? In other words, why does generalization accuracy behave so differently from the continuous progress measure? There is a sharp transition in feature alignment. Once the features are aligned beyond a certain threshold, the accuracy (and loss) both improve sharply. Thus the distinction is not between continuous or discrete measures but between measures of loss and measures of feature alignment. Why there is a discrepancy between our progress measures and accuracy / loss and formalizing the mechanism behind the delayed generalization is an important question. A simplified direction to understand this process theoretically would be to analyze how random circulant features enable generalization on modular arithmetic. >It appears that the jump in accuracy occurs when the continuous progress measure saturates. Is this consistent? Have you identified a specific threshold in the progress measures? We have not observed a direct relationship between the jump in accuracy and saturation of progress measures. While it might look like it happens in Figure 2, it is hard to visually evaluate feature improvements as the alignment is close to 1 and the curve looks flat even as alignment continues to improve. >Generalization does not occur below a certain threshold of the training data fraction. What is the interplay between this threshold and your results? Both training iterations and number of samples will improve feature quality. Thus, as generalization error is a sharp function of this quality, we see sharp transitions with respect to both. >Finally, since the paper focuses on the specific task of modular arithmetic, the scope of the results is naturally limited. Like prior works (e.g. 
[1,2]), we focus on the setting of modular arithmetic because (1) these tasks clearly exhibit the sharp transition from trivial-to-perfect generalization, and (2) there exist analytic solutions to the task that we can compare the learned algorithms to. In particular, there is substantial evidence that the algorithm implemented by neural networks is the Fourier Multiplication Algorithm [2,3]. Moreover, it appears that non-parametric methods without the ability to learn features are unable to learn modular arithmetic, hence these tasks are a strong test-bed for the predictive power of feature learning through AGOP.

[1] Power et al., Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets, Mathematical Reasoning in General Artificial Intelligence Workshop, ICLR 2021.
[2] Nanda et al., Progress measures for grokking via mechanistic interpretability, ICLR, 2023.
[3] Zhong et al., The clock and the pizza: Two stories in mechanistic explanation of neural networks, NeurIPS 2023.

>I believe that further investigation of the behaviour of the progress measures for different ratios could be beneficial. For example, it could be interesting to see the AGOP alignment (of the current iteration versus the final iteration) and the circulant deviation at several different ratios (below and above the critical threshold). Fig. 10 is also interesting in this context: the AGOP still changes quite a lot from fraction 40% to 45%, while the test accuracy is already 1 in both of them. It could be interesting to see this graph for more fractions (larger than 45) and also for the circulant deviation measure.

Please find a new figure here showing circulant deviation and AGOP alignment evolution over time for modular addition at training fractions 5%, 15%, 25%, 35%, 45%, 55%, …, 95%: https://ibb.co/ytKK0p5. As the reviewer points out, we observe that the AGOP alignment can continue to increase after test accuracy is at 100%.
We provide another new plot showing that the circulant deviation at the final iteration rapidly decreases with training size, and is very close to 0 for training fractions larger than 55%: https://ibb.co/9Hp1F3yr. >Typos: Line 70: "and and test accuracy remain at the constant" Around Eq. (6), $\alpha$ probably should be $s$ (by the way, a brief justification for the choice $s=1/2$ would be nice). Thank you for noticing this, we will fix this typo in our revision. The choice of s=½ is motivated by the case of two-layer linear networks where the NFA is exact with this exponent [1]. [1] Radhakrishnan, Belkin, Drusvyatskiy. “Linear Recursive Feature Machines provably recover low-rank matrices”, PNAS 2025. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. I would like to maintain my positive score.
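The Fourier Multiplication Algorithm referenced in this rebuttal has a compact interpretation: circular convolution of one-hot encodings implements addition mod p, and by the convolution theorem this becomes an elementwise product in the DFT basis. A minimal NumPy sketch of that identity (an illustration of the standard algorithm, not the paper's implementation; the modulus below is an arbitrary choice):

```python
import numpy as np

def fma_add(a, b, p):
    """(a + b) mod p via the DFT: the circular convolution of two one-hot
    vectors is one-hot at the sum of their indices (mod p), and by the
    convolution theorem it can be computed as a product of DFTs."""
    ea = np.zeros(p); ea[a] = 1.0
    eb = np.zeros(p); eb[b] = 1.0
    conv = np.fft.ifft(np.fft.fft(ea) * np.fft.fft(eb)).real
    return int(np.argmax(conv))  # the peak sits at (a + b) mod p

# exhaustive check for a small modulus
p = 31
assert all(fma_add(a, b, p) == (a + b) % p for a in range(p) for b in range(p))
```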
Summary: This paper studies the phenomenology of learning with recursive feature machines for modular addition, subtraction, multiplication, and division. Modular addition has become a standard setting for studying grokking as an example algorithmic task and has been studied for learning with neural networks, typically with quadratic activation functions. This paper learns the task with recursive feature machines that exhibit similarities with feature learning in neural networks. The paper also gives explicit constructions by transforming the unit vectors by circulant matrices, which is then solved with a generalizing solution by quadratic kernel regression.

Claims And Evidence: Claims are supported by clear and convincing numerical simulations and intuitive explanations.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: Yes (but not in detail). There is one Theorem in the main paper that shows that the quadratic kernel learns features after transforming the data by discrete Fourier transform. This Theorem is informally stated in the main paper and a more detailed version is given in the Appendix.

Experimental Designs Or Analyses: Yes. All the Figures in the main are good.

Supplementary Material: I just skimmed Appendix G.

Relation To Broader Scientific Literature: To what extent learning with recursive feature machines correlates with feature learning of neural networks is an active research area. Also, it is an open question whether this can be shown mathematically in simple settings. Our fundamental understanding of feature learning in neural networks is so far limited to learning in infinite-width in a mean-field regime or learning multi-index models. This paper contributes to understanding feature learning for the modular addition (and related) tasks.

Essential References Not Discussed: References are good.

Other Strengths And Weaknesses:

**Strengths:** Experiments are thorough and explanations are insightful.
It was already known that neural networks learn the Fourier Multiplication Algorithm. This recursive kernel learning gives a more transparent framework for understanding grokking. A kernel machine without feature learning also achieves zero training error but does not generalize, while learning features with RFM (= data transformation) gives a generalizing solution. That's neat! Applying (A) the correct transformation to data and a kernel machine is better than (B) learning with kernel machines, which is quite interesting (also, whether (A) beats neural networks).

**Weaknesses:**
- line 359--361 (right): already said before
- Theorem 5.1 could come earlier in the paper and also be written formally as done in the Appendix
- "well-known results for multiplicative group" may not be well known to the ML audience. It would be better to state these group theory references formally in math symbols instead of words (and add refs).

Other Comments Or Suggestions:

Suggestions:
- Please use the standard definition of the Jacobian without transpose in Def 2.1 and write $G = J_f^T J_f$ to avoid confusion.

Questions For Authors: Why is the AGOP measured by vectorizing the matrices? Why not $\text{alignment}(A, B) = \frac{\text{Tr}(A^T B)}{\|A\|\,\|B\|}$? I believe this is the standard way to measure the alignment of matrices AND it would capture similarities of eigenvectors, unlike the vectorized alignment.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
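The convention the reviewer suggests for Def 2.1, G = Jf^T Jf averaged over the data, is easy to state in code. A small finite-difference sketch of the AGOP (illustrative only, not the paper's RFM implementation), using the fact that for a linear map f(x) = Wx the Jacobian is W everywhere, so the AGOP must equal W^T W regardless of the data:

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-5):
    """Finite-difference Jacobian of f: R^d -> R^k at x (rows = outputs)."""
    d, fx = len(x), f(x)
    J = np.zeros((len(fx), d))
    for i in range(d):
        e = np.zeros(d); e[i] = eps
        J[:, i] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

def agop(f, X):
    """Average Gradient Outer Product: mean of Jf(x)^T Jf(x) over the data."""
    return np.mean([jacobian_fd(f, x).T @ jacobian_fd(f, x) for x in X], axis=0)

# Sanity check: for f(x) = W x the Jacobian is W at every point,
# so the AGOP is exactly W^T W.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))
X = rng.standard_normal((20, 5))
assert np.allclose(agop(lambda x: W @ x, X), W.T @ W, atol=1e-6)
```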
Rebuttal 1: Rebuttal: We thank the reviewer for their in-depth feedback on our submission. We will address their comments and questions here.

>Theorem 5.1 could come earlier in the paper and also be written formally as done in Appendix. "well-known results for multiplicative group" may not be well known to ML audience. It would be better to state these group theory references formally in math symbols instead of words (and add refs).

Thank you for this feedback. We will include the Theorem formally in the main text in the updated manuscript, and more clearly reference Appendix E where we discuss the re-ordering.

>Please use the standard definition of Jacobian without transpose in Def 2.1 and write G = Jf ^T Jf to avoid confusion.

Thank you for this suggestion, we will use the standard definition to avoid confusion.

>Why is AGOP measured by vectorizing the matrices? Why not $\text{alignment}(A, B) = \frac{\text{Tr}(A^T B)}{\|A\|\,\|B\|}$? I believe this is the standard way to measure the alignment of matrices AND it would capture similarities of eigenvectors, unlike the vectorized alignment.

In fact $\text{Tr}(A^T B)$ equals the inner product of the vectorized A and B, and $\|A\|, \|B\|$ are equivalent to the norms of their vectorizations, thus our formulation is indeed a standard measure of matrix similarity.
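The authors' point here, that the trace form and the cosine similarity of the flattened matrices are the same number, can be checked directly (NumPy's matrix norm defaults to Frobenius, which is the norm of the flattened matrix). A small sketch:

```python
import numpy as np

def alignment_trace(A, B):
    """Matrix alignment via the trace inner product."""
    return np.trace(A.T @ B) / (np.linalg.norm(A) * np.linalg.norm(B))

def alignment_vec(A, B):
    """The same quantity as cosine similarity of the flattened matrices."""
    a, b = A.ravel(), B.ravel()
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(1)
A, B = rng.standard_normal((6, 6)), rng.standard_normal((6, 6))
assert np.isclose(alignment_trace(A, B), alignment_vec(A, B))
```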
Summary: This paper studies the grokking phenomenon in modular arithmetic. The main idea is that grokking happens because models slowly learn the right features, not because of any specific neural network architecture or gradient-based optimization. The authors use Recursive Feature Machines (RFMs) to show that even when training loss is zero from the start, the features evolve gradually until they reach a structure that allows generalization. In their case, the key structure is a block-circulant feature matrix that connects to the Fourier Multiplication Algorithm, which exactly solves the modular arithmetic task.

Claims And Evidence: The authors claim that grokking is not specific to neural networks and is not tied to gradient-based methods. They show this by demonstrating grokking in RFMs, where the training loss is zero but the model only generalizes after many iterations when the features are more fleshed out. They support this with evidence such as gradual improvements in Circulant Deviation and AGOP Alignment, measures that capture how close the features are to the ideal block-circulant structure.

Methods And Evaluation Criteria: The paper uses a mix of theory and experiments on modular arithmetic tasks like addition, subtraction, multiplication, and division. The models are evaluated not only on standard metrics like test accuracy and test loss but also on new progress measures that track feature quality over time. The experimental design compares RFMs with neural networks and examines how features evolve during training despite the immediate reduction in training loss.

Theoretical Claims: On the theory side, the paper argues that the key to grokking is the emergence of a block-circulant feature matrix. They show that when the learned feature matrix attains this specific structure, the model effectively implements the Fourier Multiplication Algorithm.
A central result (Theorem 5.1) proves that a kernel machine with a quadratic kernel and a block-circulant Mahalanobis matrix will compute modular arithmetic operations exactly as the Fourier Multiplication Algorithm does. The math leverages the fact that circulant matrices are diagonalizable using the Discrete Fourier Transform, linking the learned feature structure directly to the generalization ability.

Experimental Designs Or Analyses: The experimental design centers on training RFMs on modular arithmetic tasks. Despite perfect training loss from the outset, the test accuracy remains near chance level for many iterations. The authors analyze how the feature matrix, particularly the AGOP, evolves over time. They introduce and track two progress measures (Circulant Deviation and AGOP Alignment) to show that the feature structure gradually becomes more organized. They also include experiments with neural networks and tests using random circulant transformations to further support the claim that the emergence of the correct feature structure is what drives grokking.

Supplementary Material: The supplementary material includes detailed proofs of the main theoretical results, such as the diagonalization of circulant matrices and the formal proof of Theorem 5.1. Additional appendices explain experimental details like the reordering of feature matrices using the discrete logarithm, extra experiments on multi-task grokking, and studies on enforcing circulant structure during training. This extra material helps to reinforce and elaborate on the main claims of the paper.

Relation To Broader Scientific Literature: This work builds on the growing literature on grokking, which has mainly focused on neural networks and delayed generalization. It relates to prior studies that view grokking as a transition from a lazy memorization regime to one where rich, useful features are learned.
The paper connects these ideas with the concept of feature learning in kernel methods and neural networks and also ties into Fourier-based methods for solving modular arithmetic. In doing so, it challenges the idea that grokking is solely a function of network architecture or gradient-based optimization. It's an interesting addition to papers like the one from 2024, "Grokking as the transition from lazy to rich training dynamics", which highlight the importance of optimization dynamics and architecture, versus this one, which highlights that feature learning is the key driver.

Essential References Not Discussed: While the paper is well-grounded in the literature on grokking and feature learning, it could benefit from more discussion of works on implicit bias in optimization and how these biases shape feature learning.

Other Strengths And Weaknesses: A clear strength of the paper is its novel approach to isolating feature learning from the usual gradient-based optimization by using RFMs. The theoretical connection between block-circulant features and the Fourier Multiplication Algorithm is super cool and interesting. On the downside, the experiments are limited to modular arithmetic tasks, so it is uncertain how well these insights will carry over to more complex, real-world dynamics. Also, the paper focuses heavily on RFMs, which may limit how easily it can be generalized to other architectures.

Other Comments Or Suggestions: No comments

Questions For Authors:
- Have you tested your approach on any tasks beyond modular arithmetic to see if the delayed feature evolution and grokking phenomenon persist?
- How sensitive are your progress measures to different kernel choices or initializations in RFMs?
- Can you provide more insights into how the reordering using the discrete logarithm affects the observed feature structure?

Code Of Conduct: Affirmed.

Overall Recommendation: 5
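The diagonalization fact underlying Theorem 5.1, as described in this review, is classical linear algebra: every n x n circulant matrix is diagonalized by the DFT matrix, with eigenvalues given by the DFT of its first column. A quick numerical check (a sketch of the standard result, not code from the paper):

```python
import numpy as np

def circulant(c):
    """Circulant matrix with first column c: C[i, j] = c[(i - j) % n]."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

n = 8
rng = np.random.default_rng(2)
c = rng.standard_normal(n)
C = circulant(c)

# DFT matrix F[j, k] = exp(-2*pi*i*j*k / n); every circulant matrix shares
# its eigenvectors, and F C F^{-1} is diagonal.
F = np.fft.fft(np.eye(n))
eigvals = np.fft.fft(c)          # eigenvalues = DFT of the first column
D = F @ C @ np.linalg.inv(F)
assert np.allclose(D, np.diag(eigvals), atol=1e-8)
```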
Rebuttal 1: Rebuttal: We thank the reviewer for their positive comments and thorough feedback. We will answer their questions below. >While the paper is well-grounded in the literature on grokking and feature learning, it could benefit from more discussion of works on implicit bias in optimization and how these biases shape feature learning. We would appreciate specific references that the reviewer feels are relevant. We would be happy to extend the discussion to include them. >On the downside, the experiments are limited to modular arithmetic tasks, so it is uncertain how well these insights will carry over to more complex, real world dynamics. Also, the paper focuses heavily on RFMs, which may limit how easily it can be generalized to other architectures. Applying RFM is crucial for establishing that (1) grokking is not tied to neural networks, as it is a non-neural model, (2) it is not tied to gradient based optimization or training loss, and (3) can be induced solely through feature learning. Further, RFM allows us to isolate the ability of feature learning through AGOP to induce generalization in surprising settings. As mentioned in another response, there is also substantial evidence that the algorithm implemented by neural networks is the Fourier Multiplication Algorithm. RFM being able to recover the FMA (in our theoretical setting) and the features learned by neural networks empirically suggest that feature learning through AGOP is a useful lens to study neural networks. This is especially true as non-parametric methods without the AGOP appear unable to learn modular arithmetic tasks. >Have you tested your approach on any tasks beyond modular arithmetic to see if the delayed feature evolution and grokking phenomenon persist? We have ongoing results where grokking occurs with RFM for other algorithmic tasks including sparse parities, the Chinese remainder theorem number representations, and certain group structures. 
>How sensitive are your progress measures to different kernel choices or initializations in RFMs? The choice of kernel can affect the rate at which grokking occurs in RFM or whether it happens at all. Quadratic and Gaussian kernels are suited for modular arithmetic due to the target function being quadratic in Fourier space, while RFM with the Laplace kernel does not appear to generalize. In all of our experiments, we use the default initialization for RFM with the identity matrix. >Can you provide more insights into how the reordering using the discrete logarithm affects the observed feature structure? Re-ordering by the discrete logarithm only affects the visualization of the features, as RFM is invariant to re-ordering of the input coordinates. We re-order coordinates for multiplication and division tasks in order to visualize the circulant structure, which is otherwise obscured. The features are circulant after re-ordering because modular multiplication/division (excluding zero element) are equivalent to addition/subtraction after transformation with the discrete logarithm. Note that it is necessary to know the structure of that re-ordering in order to invert it and recover the “underlying algorithm”.
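The discrete-log equivalence described in this reply can be sketched in a few lines: if g generates the multiplicative group mod a prime p, then mapping each nonzero residue g^k to its exponent k turns multiplication mod p into addition mod p-1. (The prime 59 and generator 2 below are illustrative choices, not values taken from the paper; the length check verifies that g really is a generator.)

```python
def discrete_log_table(p, g):
    """Map each nonzero residue g^k mod p to its exponent k, assuming g
    generates the multiplicative group mod prime p."""
    table, x = {}, 1
    for k in range(p - 1):
        table[x] = k
        x = (x * g) % p
    return table

p, g = 59, 2                   # 2 is a primitive root mod 59
dlog = discrete_log_table(p, g)
assert len(dlog) == p - 1      # g hits every nonzero residue

# Multiplication mod p becomes addition mod p-1 of the exponents:
for a in range(1, p):
    for b in range(1, p):
        assert dlog[(a * b) % p] == (dlog[a] + dlog[b]) % (p - 1)
```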
Summary: The paper focuses on "grokking" -- a phenomenon that has attracted a lot of interest recently, mostly because it is not what the ML community was used to observing regarding training dynamics and generalization. The paper's main point seems to be that grokking is not specific to neural networks and to SGD training. Instead, the authors show that grokking is also observed with Recursive Feature Machines (RFMs) when trained on modular arithmetic tasks. Another main point is that what drives generalization is the learning of structured feature representations -- and the paper proposes a couple of metrics (such as "circulant deviation") to show this gradual progress of the model towards generalization.

Claims And Evidence: I find the paper a bit confusing in its "positioning" with respect to prior work in this research area. First, the paper focuses exclusively on modular arithmetic -- but grokking is a more general phenomenon and it has been observed also in language models, among others. This has to be clarified somehow: even though the paper helps us better understand the process of learning structured features in a very specific task that relates to modular arithmetic, its claims and evidence cannot be generalized beyond such tasks.

Methods And Evaluation Criteria: I like how the paper uses both mathematical insights and experimental results -- and overall I found it to be rigorous, with clear definitions, methods, and metrics.

Theoretical Claims: The most important mathematical result of the paper is (in my own words): a good solution for the modular arithmetic problem is the Fourier multiplication algorithm (FMA). The paper shows that circulant matrices allow kernel machines to solve modular arithmetic using the FMA.

Experimental Designs Or Analyses: The paper is quite strong in terms of experimental design and analysis -- showing several results.
Some of the empirical claims that I found most interesting are: a) that even RFMs can show grokking behavior, b) that a metric such as the AGOP shows gradual improvement during learning -- as opposed to accuracy, c) that circulant features solve the modular arithmetic problem using the FMA.

Supplementary Material: I admit that I went through the supplementary material superficially.

Relation To Broader Scientific Literature: The paper has connections with broader questions related to learning theory, emergent phenomena in learning, and foundations of AI.

Essential References Not Discussed: None that I know of.

Other Strengths And Weaknesses:
- The paper certainly contributes to the fundamental understanding of grokking -- but it does not explain some key characteristics of grokking (such as, what determines how long it will take, say in epochs, until the model learns those structured features?)
- And as previously mentioned, the paper is quite narrow -- focusing only on modular arithmetic.

Other Comments Or Suggestions: I think that the paper can be improved if the authors find a way to describe more clearly, and from the first section of the paper, what the key "take home message" of this study is.

Questions For Authors:
- Is there an information-theoretic way to show why circulant matrices emerge?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback. We address their questions and comments below.

>First, the paper focuses exclusively on modular arithmetic -- but grokking is a more general phenomenon and it has been observed also in language models, among others. This has to be clarified somehow: even though the paper helps us better understand the process of learning structured features in a very specific task that relates to modular arithmetic, its claims and evidence cannot be generalized beyond such tasks.

Like prior works (e.g. [1,2]), we focus on the setting of modular arithmetic because (1) these tasks clearly exhibit the sharp transition from trivial-to-perfect generalization, and (2) there exist analytic solutions to the task that we can compare the learned algorithms to. In particular, there is substantial evidence that the algorithm implemented by neural networks is the Fourier Multiplication Algorithm [2,3]. Moreover, it appears that non-parametric methods without the ability to learn features are unable to learn modular arithmetic, hence these tasks are a strong test-bed for the predictive power of feature learning through AGOP.

[1] Power et al., Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets, Mathematical Reasoning in General Artificial Intelligence Workshop, ICLR 2021.
[2] Nanda et al., Progress measures for grokking via mechanistic interpretability, ICLR, 2023.
[3] Zhong et al., The clock and the pizza: Two stories in mechanistic explanation of neural networks, NeurIPS 2023.

>The paper certainly contributes to the fundamental understanding of grokking -- but it does not explain some key characteristics of grokking (such as, what determines how long it will take, say in epochs, until the model learns those structured features?)
While this question is beyond the scope of this paper, we do observe the rate at which features evolve through the AGOP alignment and circulant deviation progress measures. We find that while the features make progress linearly through iteration, generalization exhibits a sudden phase transition, suggesting error is a sharp function of feature quality specifically. We are exploring theoretical analyses to potentially identify the threshold when better features lead to improved generalization. >Is there an info theoretic way to show why circulant matrices emerge? One possibility to understand why circulant matrices emerge on these tasks is through their effect on the Mahalanobis kernel. As circulant matrices are diagonalized by the DFT matrix, we can view inner products through the block-circulant matrix as applying the DFT to each of the one-hot encoded integer arguments, then re-weighting the frequency components by the (complex) eigenvalues of the circulant sub-blocks. We suspect the frequency reweighting induces linear separability of the DFT vectors, enabling generalization with kernel regression. We are currently exploring this direction.
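The frequency-reweighting picture sketched in this reply can be checked numerically for an ordinary circulant matrix: since such a matrix is diagonalized by the DFT, the quadratic form x^T C y equals a sum over frequencies of the circulant's eigenvalues times conj(x_hat_k) * y_hat_k, scaled by 1/n. A minimal sketch of that identity (not the authors' kernel code):

```python
import numpy as np

def circulant(c):
    """Circulant matrix with first column c: C[i, j] = c[(i - j) % n]."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

rng = np.random.default_rng(3)
n = 16
c = rng.standard_normal(n)
x, y = rng.standard_normal(n), rng.standard_normal(n)

lam = np.fft.fft(c)                  # eigenvalues of the circulant
direct = x @ circulant(c) @ y        # quadratic form in coordinate space
# same quantity as a frequency-reweighted inner product of the DFTs
freq = (lam * np.conj(np.fft.fft(x)) * np.fft.fft(y)).sum().real / n
assert np.isclose(direct, freq)
```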
Autoformulation of Mathematical Optimization Models Using LLMs
Accept (poster)
Summary: This paper proposes an approach for autoformulation—the automated creation of solver-ready mathematical optimization models from natural language problem descriptions. The authors frame autoformulation as a search problem and leverage Large Language Models (LLMs) with Monte-Carlo Tree Search (MCTS) to systematically generate, explore, and evaluate candidate optimization formulations.

Claims And Evidence:
- I personally disagree with the claim that "optimization modeling follows a three-step process." Recent papers in operations research primarily focus on proving why their (human-derived) formulations are theoretically correct. While understanding the problem description in text and generating execution code can be useful, the core aspect of optimization modeling—the theoretical justification of the formulation—is entirely missing from this perspective.
- A key limitation of this work is that the task itself may not reflect a real-world need—one that domain scientists would actually use. Can you justify the practical use case of such a system and provide an estimate of who, and how many, would realistically adopt it? Otherwise, creating such a task solely for benchmarking against potentially irrelevant metrics raises concerns about its practical impact.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: No theory is provided in this paper.

Experimental Designs Or Analyses: The authors empirically evaluate the proposed method and demonstrate that it achieves better results.

Supplementary Material: Yes. I read every section in the appendix.

Relation To Broader Scientific Literature: The developed software can be beneficial to the operations research community.

Essential References Not Discussed: NA

Other Strengths And Weaknesses:

**Strengths:**
- The experimental evaluation is solid.

**Weaknesses:**
- The paper is purely empirically driven. Given its focus, it may be more suitable for an EMNLP conference.
Other Comments Or Suggestions: NA

Questions For Authors: There is already extensive research on using LLMs with MCTS across various tasks aimed at improving automation, so the main algorithm in this work is not novel. Additionally, a key limitation of this work is that the task itself may not be a *real* task—one that domain scientists would actually use. Why create such a task solely for benchmarking against metrics that may lack practical relevance?

Ethical Review Concerns: NA

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: *We appreciate the reviewer’s detailed and thoughtful evaluation.* --- ### [P1] Engineering vs theory Thank you for the thoughtful comment. Our framing of optimization modeling as a three-step process (requirements gathering $\rightarrow$ mathematical model $\rightarrow$ computational model) is intended to reflect the **engineering** side of the modeling pipeline. We agree that in many academic OR contexts, the emphasis lies in **theoretical** justifications—such as proving optimality bounds, analyzing relaxations, or exploring structural properties—which are essential to advancing the science of optimization. While theoretical work is crucial, we also believe our focus remains valuable (further discussed in **[P2]**). For domain scientists who are not optimization experts, this technology can help democratize modeling by lowering the barrier to entry. For experts, the autoformulator can accelerate prototyping and iteration. When theoretical validation is ultimately required, our system can automate time-consuming or error-prone steps, surface promising candidate formulations, and free up modeling experts to focus on deeper theoretical analysis. **Actions taken.** We have updated the manuscript to acknowledge this theoretical dimension more clearly. Specifically, we revised lines `L10–L16 R` (Introduction) to highlight the role of theoretical modeling and added a discussion in `L405–L420 R` (Discussion) on future directions that connect autoformulation with theoretical reasoning. --- ### [P2] Real-world utility **Practical utility.** We believe that in many domains—such as logistics, healthcare, energy, and manufacturing—there exists a real and growing need to bridge the gap between domain expertise and optimization modeling expertise. 
Gurobi’s "State of Optimization Report 2023" [R1–R2], based on a survey of 394 commercial users, identified optimization modeling as a rapidly growing area of demand, while also highlighting a significant skills gap. Just as coding LLMs have empowered non-expert programmers to build software, autoformulators can lower the barrier to modeling, reduce translation bottlenecks, and empower domain experts to experiment with formal optimization tasks. For experienced modelers, these systems can accelerate iteration, support exploratory modeling, and free cognitive effort for higher-level design and analysis. **Emerging research & industry focus.** Autoformulation is gaining traction in both research and industry. On the research side, recent systems such as Optimus and Chain-of-Experts (both appeared at ICLR 2024) highlight growing interest in the scientific and technical challenges of this space. We note that the benchmarks, tasks, and metrics we used are consistent with those employed in prior works. Industry interest further reinforces the relevance of this problem: (1) Gurobi’s modeling assistant [R3], (2) AWS’s optimization AI tools [R4], and (3) IBM’s research on AI-assisted modeling [R5] all address related aspects of the autoformulation pipeline. We view these developments as strong signals of real-world utility, and our system contributes foundational capabilities to this emerging and impactful area. **Actions taken.** Thank you for raising this important point. We have added a new appendix section outlining practical and research use cases of our system. 
[R1, R2] stage.gurobi.com/resources/report-state-of-mathematical-optimization-2023/, stage.gurobi.com/resources/report-state-of-mathematical-optimization-in-data-science-2023/ [R3] gurobi-ai-modeling.readthedocs.io [R4] aws.amazon.com/cn/blogs/industries/optimization-in-the-era-of-generative-ai/ [R5] research.ibm.com/publications/enhancing-decision-making-through-the-integration-of-large-language-models-and-operations-research-optimization-bridge-talk --- ### [P3] Highlighting novelty We introduced a novel MCTS-based technique tailored to the structure of optimization modeling tasks, grounded in the core challenges **[C1-C3]**, including (1) domain-specific MCTS strategy to enhance hierarchical exploration; (2) SMT-based pruning of trivial equivalences, improving search efficiency by two orders of magnitude; and (3) comparative scoring of candidate formulations, enhancing feedback for search guidance. Empirically, we show that generic, off-the-shelf approaches fail to address key challenges in evaluating formulation quality (`S5.2`) and eliminating trivial redundancies (`Fig. 4`). In contrast, our approach outperforms prior state-of-the-art methods—including LLMs specifically finetuned for autoformulation—achieving new best results on formulation correctness across multiple challenging benchmarks. --- *Thanks again; we hope our responses addressed your concerns and would appreciate your consideration in updating the score. We welcome further discussions.* --- Rebuttal Comment 1.1: Comment: Thanks for answering my questions. As you mentioned in your reply—referring to Gurobi and the Amazon link—this work might be better appreciated by a conference in operations research, where the problem description itself may evolve over time, making classic formalizations less effective. 
In such cases, a tool for automatic problem formalization is essential, aligning with the field's emphasis on future directions in automation and the auto-formulation of mathematical optimization problems. Regarding the technical innovation, I personally feel that the contributions are primarily focused on automating the optimization and problem-formalization pipeline. As a result, it may be challenging for this community to extract new insights from the work, making it feel relatively less novel. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful follow-up. We appreciate your acknowledgment that the autoformulator aligns with future-facing research directions, particularly in automating mathematical problem formalization. --- ### [P4] Suitability for conference We first want to highlight that our submission is aligned with the conference's [Call for Papers](https://icml.cc/Conferences/2025/CallForPapers), which explicitly welcomes `Application-Driven Machine Learning`, defined as “innovative techniques, problems, and datasets that are of interest to the machine learning community and driven by the needs of end-users.” Our work, under the “Applications” primary area, directly responds to this call by introducing novel methods to address the growing gap between domain expertise and optimization expertise in fields like logistics, healthcare, and engineering. This line of work has also gained momentum at recent ML venues, which further supports its suitability to be presented at ICML 2025: * **NL4OPT** (NeurIPS 2022, Competition Track) * **Chain-of-Experts** (ICLR 2024) * **Optimus** (ICML 2024) * **LLMOPT** (ICLR 2025) These papers explore natural language to optimization model pipelines using LLM frameworks. Our method extends this thread by introducing hierarchical search, comparative scoring, and symbolic pruning, enabling tractable and scalable exploration of the formulation space. 
--- ### [P5] Technical novelty While our method builds on known components, its primary ML contribution lies in combining neural and symbolic reasoning in a domain-specific structure, where generic approaches fail. This is evidenced both empirically (e.g., comparative results in `S5.2` and `Fig. 4`) and technically (e.g., SMT-based pruning, comparative scoring, and domain-specific MCTS), culminating in state-of-the-art performance across multiple competitive benchmarks. More broadly, our work is relevant to an important ML research trend: building compound systems capable of structured reasoning in real-world domains. In areas like program synthesis, tool-using agents, and scientific modeling, research increasingly focuses on how to scaffold LLMs with **symbolic tools** and **problem-specific structure**—from execution traces and solver feedback to formal mathematical constraints—to produce outputs that are not only fluent, but also correct, aligned, and executable. As such, beyond the autoformulation-specific utility, we believe our work also offers generalizable insights for structured reasoning tasks—through exploiting domain-specific structure, symbolic tools, and efficient search algorithms. --- We believe this community is well-positioned to advance interdisciplinary research that intersects learning, structured search, and symbolic reasoning—and we would be eager to contribute to that conversation.
Summary: This paper proposes a search-based autoformulation of mathematical optimization problems. The authors provide a formal definition of autoformulation and use MCTS to construct the formulation. Experiments demonstrate the method can outperform the baselines. Claims And Evidence: The paper proposes three challenges of autoformulation. "(1) the vast, problem-dependent hypothesis space, (2) efficient and diverse exploration of this space under uncertainty, and (3) evaluation of formulation correctness against problem description." However, the paper seems to address only the second challenge. Methods And Evaluation Criteria: 1. My biggest concern is the unfair experimental setting. The MCTS can have many rollouts. If I understand correctly, the authors count a problem as solved if any one of the rollouts is correct. This is unfair since the baselines have only one "rollout" during evaluation. As shown in Figure 5, if we set the rollout number to 1, the accuracy is under 36%, which is inferior to the 38% of ORLM. 2. The authors may want to clarify how to choose the best answer among the rollouts. In my understanding, the method should provide a final answer among the answers given in different rollouts, instead of providing all the answers as final outputs. 3. In this paper, the depth of the MCTS is under 5. Such a complex MCTS seems unnecessary for such shallow search depths. Theoretical Claims: This paper does not contain any proof for theoretical claims. Experimental Designs Or Analyses: 1. The dataset is limited. The paper only conducts experiments on two datasets in the main paper and more datasets are needed, such as the MAMO dataset and ComplexOR dataset. 2. Some experimental results are missing. I wonder whether the results of Reflexion/Chain-of-Experts/Optimus are missing on the IndustryOR dataset. Supplementary Material: Yes. 
I wonder whether Appendix D is meant to show the solving effectiveness of the formulation provided by MCTS. If not, it seems unrelated to the paper's contents. Relation To Broader Scientific Literature: This paper is related to autoformulation for operations research problems. Essential References Not Discussed: None Other Strengths And Weaknesses: None Other Comments Or Suggestions: Please see the comments above. I suggest the authors refine the experiments part of this paper. Questions For Authors: Please see the comments above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: *We appreciate the reviewer’s detailed and thoughtful evaluation and positive feedback.* --- ### [P1] Addressing challenges Thank you for raising this point. However, we believe this may stem from a misunderstanding. Please allow us to clarify how our method addresses all three challenges: | Challenge | Description | Our approach | Location in paper | |----------|-------------|--------------|-------------------| | **[C1] Problem-dependent hypothesis space** | The formulation space is vast and problem-specific, making manual definition infeasible. | LLMs serve as problem-conditioned hypothesis generators that implicitly define the hypothesis space; candidate components are expanded hierarchically.| `L238–L259 L` | | **[C2] Efficient search** | Explore the space efficiently under uncertainty, avoiding redundant or unpromising paths. | MCTS over a hierarchical decomposition balances exploration-exploitation; symbolic pruning eliminates trivial equivalences. | `L182–L208 R`; `L260 L–L240 R` | | **[C3] Model evaluation** | Evaluate whether generated formulations faithfully represent the original problem. | Partial evaluation via node-level ranking during expansion; comparative scoring of complete formulations. | `S3.2.2`; `S3.2.3` | --- ### [P2] Evaluations Thank you for pointing this out—we agree with your concern. Our method was originally evaluated using a pass@N metric (i.e., success if any of N iterations is correct), while baseline comparisons were reported under pass@1. **Actions taken.** We have re-evaluated ORLM under pass@N by generating N independent samples to match the iteration count (Note: our method still outperforms non-ORLM baselines at N=1). We report in [this table](https://imgur.com/a/1DUogjH) this fairer comparison (now added to the appendix), and clarified evaluation metrics in the main text. We observe that our approach maintains a clear advantage even under pass@N. 
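To make the metric concrete, the difference between pass@1 and pass@N can be sketched as follows (a minimal illustration with hypothetical correctness data, not our actual benchmark results):

```python
def pass_at_n(attempt_correct, n):
    """pass@N: a problem counts as solved if ANY of its first n
    attempts is correct; pass@1 only looks at the first attempt."""
    return any(attempt_correct[:n])

# Toy per-problem correctness of N=4 independent attempts
# (hypothetical data, for illustration only).
results = [
    [False, True, False, False],   # solved at attempt 2
    [False, False, False, False],  # never solved
    [True, True, False, True],     # solved at attempt 1
]

pass_at_1 = sum(pass_at_n(r, 1) for r in results) / len(results)
pass_at_4 = sum(pass_at_n(r, 4) for r in results) / len(results)
```

Comparing a pass@N method against pass@1 baselines inflates the gap, which is why re-evaluating baselines with matched sample budgets gives the fairer comparison above.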
This reflects a core strength of our approach: it performs structured exploration over functionally diverse formulations, which ORLM does not inherently support. We appreciate your feedback; it helped improve the rigor of our evaluation. --- ### [P3] Selection In our method, each MCTS iteration produces a complete formulation along with a score from our evaluation mechanism, allowing us to select the highest-scoring one if desired. **Actions taken.** We now report best-of-N accuracy based on score-ranked outputs in [this table](https://imgur.com/a/0haw2V6) (i.e., selecting best-of-N formulations using greedy selection). Our selected formulations continue to outperform baselines (specifically, our mechanism selects the best formulation on >90% of problems), supporting the effectiveness of our scoring mechanism. --- ### [P4] MCTS Thank you for raising this point. We refer the reviewer to our response to Reviewer `fWQR` (**[P1]**) for a detailed discussion of why optimization modeling benefits from hierarchical decomposition and MCTS-based structured exploration. **Empirical evidence.** We include two additional baselines: (1) a Tree-of-Thought using a different tree search strategy, and (2) a naive sequential sampling baseline without tree search. Our method consistently outperforms both, indicating that even at shallow depths, our MCTS framework yields meaningful gains through principled exploration and pruning. --- ### [P5] Additional benchmarks We agree that broader benchmarking is important. In response, we added experiments on both MAMO (focusing on the more challenging ComplexLP subset) and ComplexOR. Due to time constraints, we prioritized ComplexLP over EasyLP, which is relatively saturated. On both benchmarks, our method outperforms all baselines, further validating its effectiveness. Results have been added to the updated [Table 1](https://imgur.com/a/Rl5AGhv). 
--- ### [P6] Other comments * **Baseline results on IndustryOR.** We attempted to run Chain-of-Experts and Optimus on IndustryOR using their released code but encountered compatibility issues. As noted by the IndustryOR authors, these methods require non-trivial adaptation. Due to time constraints, we were unable to complete this during the rebuttal, but we plan to include these results in a future version (if accepted). * **App D.** App D is not meant to evaluate our method directly, but to explore how solver and formulation choices impact downstream performance. While not central, we included it for context and will trim it in a future version to improve clarity. --- *Thanks again; we hope our responses addressed your concerns and would appreciate your consideration in updating the score. We welcome further discussions.*
Summary: This work studies autoformulation for mathematical optimization models, or the task of building an optimization model from natural language prompts describing the problem. The approach begins by defining the construction of an optimization model as a hierarchical task, with steps of selecting parameters and decision variables, an objective function, equality constraints, and inequality constraints. These tasks are treated hierarchically using Monte Carlo tree search, where LLMs are used to generate candidate nodes. The authors provide some creative ways to prune trivially equivalent nodes, as well as to score the candidate nodes. Claims And Evidence: The claims that the tree search, pruning steps, and scoring steps are all effective are well evidenced by comparison against LLM benchmarks and ablation studies in Sections 5.1-5.4. Methods And Evaluation Criteria: The proposed method is tested on two problem sets, NLP4OPT and IndustryOR, and compared against several recent methods for autoformulation. The main evaluation criterion used is the proportion of generated formulations returning the correct optimal objective value. While this is an easy proxy for model correctness, it would be more interesting to have even a limited number of instances that are checked by a human optimization expert (i.e., is this the model an expert would have built?). Moreover, the majority of models tested are linear programs; it would be more interesting to see how the proposed approach generalizes (or requires improvements) in nonlinear or discrete settings. Theoretical Claims: The only theoretical claim made appears to be that the proposed SMT solver can correctly prune trivially equivalent models, which is described in the Appendix. Experimental Designs Or Analyses: The experiments are designed well, and several ablation experiments are performed. It would be interesting to see more evaluation criteria in addition to just having the correct objective value. 
For example, how many of the models correctly model, overapproximate, or underapproximate the feasible region? How many of the models have variables and parameters correctly defined? Supplementary Material: The provided appendices describe the SMT pruning method, the LLM prompts used, and the categorization of test instances Relation To Broader Scientific Literature: This paper follows on a recent line of work on autoformulation of optimization models. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: *We appreciate the reviewer’s detailed and thoughtful evaluation and positive feedback.* --- ### [P1] Additional analysis Thank you for this thoughtful suggestion. While objective-value correctness is a standard metric in prior work, we agree that it is an imperfect proxy. To address this, we conducted a targeted expert evaluation on 18 autoformulated problems from the ComplexOR benchmark. An optimization expert manually reviewed each generated model and assessed the correctness of its major components—**(1)** decision variables, **(2)** objective function, **(3)** equality constraints, and **(4)** inequality constraints—along with an overall correctness label. This allowed us to identify sources of modeling errors and compare agreements between expert assessments and our objective-value-based proxy. | Component | Dec var | Obj fun | Eq const | Ineq const | Agreement % | |---|---|---|---|---|---| | Error rate among incorrect models | 23% | 15% | 54% | 54% | 82% | **Analysis.** The expert's analysis indicates that the most common sources of error were related to constraint modeling---particularly inequality constraints (e.g., misclassification as equality constraints, incorrect formulation, or omission)---with both equality/inequality constraint errors accounting for over half of all incorrect formulations. Additionally, we observed 82% agreement between the expert’s correctness judgments and the objective-value proxy. In two cases, the expert flagged structural errors despite a correct objective value; in two others, the expert deemed the model correct despite a mismatch in optimal objective values. This suggests that objective-value correctness is a useful but imperfect proxy for full structural accuracy. **Actions taken.** We have incorporated these findings into the manuscript and now highlight expert evaluation as a valuable future direction for benchmark development and model assessment. 
--- ### [P2] Other problem types Thank you for this observation. Existing benchmarks in this space—NLP4OPT, IndustryOR, MAMO, and ComplexOR—currently consist exclusively of LP and MILP problems, which shaped the scope of our evaluation. Expanding to nonlinear or more complex problems is an important direction for future work, though it first requires the development of suitable benchmarks. We note that LPs and MILPs remain highly relevant in practice: according to [R1], 61% of real-world OR problems are MILPs and 41% are LPs. These classes also present the same meaningful modeling challenges **[C1-C3]** for autoformulation, making them a useful starting point. **Additional results.** That said, we agree that broader evaluations are valuable. Based on your comment and related feedback, we extended our experiments to additional datasets to further strengthen our evaluations. While these new benchmarks, ComplexOR and the ComplexLP subset of MAMO, still consist of LPs and MILPs, they feature more complex problems and serve as a stronger assessment of generality. Our method outperforms all baselines on both, as shown in our updated [Table 1](https://imgur.com/a/Rl5AGhv). [R1] Gurobi State of Mathematical Optimization 2023 Report https://stage.gurobi.com/resources/report-state-of-mathematical-optimization-2023/ --- *Thanks again; we hope the reviewer’s concerns are addressed and welcome further discussions.*
Summary: This paper introduces autoformulation, the automated translation of natural language problem descriptions into solver-ready mathematical optimization models, addressing the reliance on human expertise in traditional modeling. The proposed method integrates LLMs with MCTS to hierarchically decompose and systematically explore the model space, enhanced by symbolic pruning to eliminate redundant formulations and LLM-based evaluation for guided search. Empirical results on linear and mixed-integer programming benchmarks demonstrate significant performance improvements over existing approaches, showcasing the effectiveness of combining hierarchical exploration, pruning, and dual reward mechanisms. Claims And Evidence: see the section Strengths And Weaknesses Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: Reasonable experimental designs and analyses Supplementary Material: I have reviewed all the supplementary material Relation To Broader Scientific Literature: This paper bridges advancements in LLMs and MCTS by introducing a hierarchical, pruning-enhanced framework for automated optimization modeling, extending their applications to mathematical formulation challenges while addressing efficiency and correctness evaluation gaps in prior autoformulation research. Essential References Not Discussed: Essential References are well-discussed. Other Strengths And Weaknesses: ### **Strengths** 1. The paper is well-organized and easy to follow, with a clear structure that enhances readability. 2. It gives a deep analysis of the current challenges in this direction 3. Comprehensive ablation experiments are conducted to evaluate each component of the proposed pipeline. ### **Weaknesses** I am not an expert in this field, so my current stance is a tentative weak acceptance. My recommendation could change following insights from other reviewers’ comments or if the authors provide more clarifications during the rebuttal period. 
The authors propose a pipeline that leverages LLMs integrated within an MCTS framework for autoformulation tasks. However, I have several concerns that merit further explanation. First, the tree search employed in the proposed method is notably shallow (consisting of only four layers). I am curious about how the use of MCTS in this context yields substantial benefits over more straightforward or naive search algorithms. Furthermore, the pipeline bears some resemblance to the tree-of-thought framework, albeit with what appears to be more advanced search strategies tailored specifically for autoformulation. I would appreciate a detailed comparison between the proposed method and the tree-of-thought approach. Including the tree-of-thought method as one of the baseline comparisons in the experimental evaluation would provide a more comprehensive context for assessing the merits of the proposed approach. Additionally, I suggest that the authors enhance the figure captions with more detailed explanations. As it stands, I found myself repeatedly referring back to the main text to fully understand the figures. Other Comments Or Suggestions: No. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: *We appreciate the reviewer’s detailed and thoughtful evaluation and positive feedback.* --- ### [P1] MCTS motivation Thank you for the thoughtful question. We interpret this as raising two related concerns: **(1)** why use a tree-based search framework at all, and **(2)** why choose MCTS specifically. **(1) Tree-based search rationale.** Although the search depth is modest, optimization models naturally decompose into hierarchical components—decision variables, objective function, and constraints—making tree-based search well-suited. This structure enables: * Clearer credit assignment, isolating which components contribute to success; * Efficient feedback sharing across subtrees; * Tractable redundancy pruning via SMT, which operates more effectively at the component level. These factors contribute to focused exploration, efficient reuse of components, and better search guidance. These benefits are lost when searching directly over complete formulations, resulting in a flat, entangled space. **(2) Why MCTS.** We choose MCTS over alternative tree search methods (e.g., DFS, BFS, A*) for two reasons: * Exploration under uncertainty. MCTS balances exploration and exploitation using value estimates and UCB, making it better-suited for the inherent ambiguity in autoformulation, as it progressively focuses on promising branches. This stands in contrast to deterministic tree-based algorithms. * Feedback-driven search. MCTS uses backpropagation to update its search policy, whereas other methods follow fixed traversal policies (without adjusting the search from observed outcomes). 
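To illustrate the exploration-exploitation balance concretely, below is a generic UCT selection rule of the kind our MCTS relies on (a hedged sketch with hypothetical component names and statistics, not our exact strategy):

```python
import math

def uct_score(total_value, visits, parent_visits, c=1.4):
    """UCB1 applied to trees (UCT): mean observed value plus an
    exploration bonus that shrinks as a child accumulates visits."""
    if visits == 0:
        return float("inf")  # unvisited children are tried first
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

# Toy expansion step: three candidate components with (total value, visits).
children = {"obj_A": (3.0, 5), "obj_B": (2.5, 3), "obj_C": (0.0, 0)}
parent_visits = sum(v for _, v in children.values())

# The unvisited candidate is selected despite its lower observed value.
best = max(children, key=lambda k: uct_score(*children[k], parent_visits))
```

Unvisited candidates receive an infinite score and are expanded first; among visited ones, the bonus term steers the search toward under-explored branches, which is the uncertainty-aware behavior that deterministic DFS/BFS traversals lack.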
**Empirical evidence.** We point to three existing lines of evidence supporting the efficacy of our MCTS framework: * Sustained exploration: our method continues discovering novel, functionally distinct formulations over iterations (`Fig 5`); * Search efficiency: SMT-based pruning reduces redundancy by two orders of magnitude, improving efficiency (`Fig 4`); * Improved search guidance: our comparative scoring method provides more robust feedback, enhancing search guidance (`S5.2`). **Additional results.** Following your feedback, we further isolate our MCTS method's contributions through two baselines: * A Tree-of-Thought (DFS) baseline with the same hierarchy but no uncertainty guidance or search feedback; * A naive sequential sampling baseline (same hierarchy) without structured search or pruning (i.e., each component is sampled sequentially, conditioned on the partial formulation). Results from these additional comparisons are provided in **[P2]** and are referenced for further analysis. --- ### [P2] Tree-of-Thought Thank you for this comment. Please find a conceptual comparison below: | **Dimension** | **Autoformulator** | **Tree-of-Thought (ToT)** | |---|---|---| | **Tree structure** | Structured, domain-specific: variables → objective → constraints | Free-form, unstructured intermediate thoughts | | **Node expansion** | LLM-generated components + SMT-based pruning | LLM-generated thoughts; no symbolic pruning | | **Node evaluation** | LLM-based comparative scoring of nodes and full models | LLM self-evaluation of nodes in isolation | | **Search strategy** | MCTS with UCT (uncertainty-guided, feedback-driven) | Greedy, DFS, or BFS; no feedback or uncertainty modeling | Compared to ToT, our approach incorporates (1) structured, domain-specific decomposition, (2) uncertainty-aware, feedback-guided exploration, and (3) symbolic pruning to improve efficiency and reduce redundancy. 
**Empirical comparison.** We include both Tree-of-Thought and the sequential sampling baseline in our [updated results](https://imgur.com/a/2NjUrAA). Our method consistently outperforms both across all benchmarks, underscoring the importance of structured decomposition, feedback-driven search, and redundancy pruning. Compared to ToT, MCTS enables more effective exploration by leveraging uncertainty and cumulative feedback to avoid suboptimal branches and refine search trajectories. The comparison with the naive baseline highlights the limits of decomposition alone: without guided search or pruning, performance deteriorates, and manual inspection reveals more invalid or redundant formulations. **Actions taken.** We have integrated these additional baseline results into the updated manuscript in `Table 1`. --- ### [P3] Other comments * **Figure captions:** Thank you for this suggestion; we have revised the captions to include more detailed and self-contained explanations. --- *Thanks again; we hope our responses addressed your concerns and would appreciate your consideration in updating the score. We welcome further discussions.* --- Rebuttal Comment 1.1: Comment: Thank you for the detailed responses. With my concerns addressed, I have decided to maintain my current score. --- Reply to Comment 1.1.1: Comment: Thank you, we are glad we could address your concerns and appreciate your constructive feedback that made our work better! The Authors of #10013
Adaptive Exploration for Multi-Reward Multi-Policy Evaluation
Accept (poster)
Summary: This paper studies sample-efficient exploration for the multi-reward multi-policy evaluation problem, which aims to simultaneously evaluate all target policy-reward pairs. The authors use an instance-specific lower bound to guide the design of their proposed efficient exploration method. Furthermore, they propose a convex approximation for bound computation and present numerical simulations to demonstrate the effectiveness of their method. Claims And Evidence: It seems that the claims are supported by sufficient evidence and interpretation. Methods And Evaluation Criteria: The proposed methods and evaluation criteria seem reasonable for the problem and application at hand. Theoretical Claims: I did not check the proofs in detail but reviewed the insights behind the claims. Experimental Designs Or Analyses: The experimental designs and analyses seem sound and valid. Supplementary Material: I partially reviewed the supplementary material, specifically the numerical results section. Relation To Broader Scientific Literature: Efficient multi-policy evaluation is useful for conducting ablations or tuning various algorithm hyperparameters, while efficient multi-reward evaluation can serve as a subroutine in MORL algorithms. Essential References Not Discussed: There are no essential related works that are missing from the citations or discussion in the paper. Other Strengths And Weaknesses: Strength 1. This paper provides an instance-specific sample complexity lower bound that separately considers P and r. Furthermore, based on this lower bound, the authors identify the main factor that influences the scaling of the sample complexity lower bound. 2. To address the non-convexity of the lower bound, the authors propose an appropriate convex envelope. Weakness 1. It would be helpful if the difference or relationship between this work and the main prior work, MR-NaS, were explained more clearly. Other Comments Or Suggestions: 1. 
Could this be a potential typo?: In line 176, second column: $\mathrm{Alt}=\emptyset$ is empty $\to$ $\mathrm{Alt}$ is empty? 2. Could this be a potential typo?: In line 2 of Algorithm 1, $\inf \to \arg\inf$? Questions For Authors: 1. In my understanding, the first bullet below Proposition 4.1 suggests that if the condition is satisfied, then Alt is empty, leading to a no-confusing scenario. Is my understanding correct, or am I missing something? 2. I would appreciate a clarification on how the authors' algorithm differs from the prior work MR-NaS in terms of algorithmic aspects, as I had difficulty understanding this distinction. 3. If there are no algorithmic differences, should I understand the contribution as providing a theoretical sample complexity bound for applying MR-NaS to the Multi-Reward Multi-Policy Evaluation problem? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for evaluating our work and for their positive appreciation of our paper. Below, we address your questions: > In my understanding, the first bullet below Proposition 4.1 suggests that if the condition is satisfied, then Alt is empty, leading to a no-confusing scenario. Is my understanding correct, or am I missing something? The reviewer's understanding is correct: as $\epsilon$ increases or $\gamma$ decreases, the condition is more likely to hold, which leads to a no-confusing scenario. In the text, we mistakenly wrote the opposite due to a last-minute change in the direction of the inequality in Proposition 4.1. We will correct this in the final version of the paper. > I would appreciate a clarification on how the authors' algorithm differs from the prior work MR-NaS [...] If there are no algorithmic differences, should I understand the contribution as providing a theoretical sample complexity bound for applying MR-NaS to the Multi-Reward Multi-Policy Evaluation problem? We thank the reviewer for the opportunity to clarify this point. This algorithmic structure is actually inspired by approaches in Best Arm Identification in Bandit problems (Garivier \& Kaufmann, 2016), where it is common to apply a Track-and-Stop procedure. This procedure computes an exploration strategy $w_t^*$ at each time step and samples the next action from it (for more examples, see also Degenne \& Koolen, 2019; Al Marjani et al., 2021). While the general algorithmic structure is similar in all these works, the specific optimization problem in computing $w_t^*$ differs because of the way the lower bound is derived here (i.e., how the reward set and its geometry affect sample complexity). 
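As a minimal illustration of this shared algorithmic structure, the generic Track-and-Stop loop can be sketched as follows (the fixed allocation and budget-based stopping rule below are placeholders for the problem-specific optimization and stopping criterion, not our actual algorithm):

```python
def track_and_stop(oracle, sample, stopping_rule, actions, max_t=10_000):
    """Generic Track-and-Stop skeleton: at each step, recompute the
    oracle allocation w_t* and pull the most under-sampled action
    (D-tracking), until the stopping rule fires."""
    counts = {a: 0 for a in actions}
    for t in range(1, max_t + 1):
        w = oracle(counts)                                    # allocation w_t*
        a = max(actions, key=lambda a: t * w[a] - counts[a])  # D-tracking
        sample(a)                                             # observe outcome
        counts[a] += 1
        if stopping_rule(counts, t):
            break
    return counts

# Toy instance: a fixed target allocation over three actions and a
# fixed-budget stopping rule (both hypothetical stand-ins).
target = {"a1": 0.5, "a2": 0.3, "a3": 0.2}
counts = track_and_stop(
    oracle=lambda counts: target,
    sample=lambda a: None,
    stopping_rule=lambda counts, t: t >= 100,
    actions=list(target),
)
```

D-tracking keeps the empirical visitation frequencies close to the oracle allocation; the works cited above differ mainly in the optimization problem solved inside `oracle` and in the statistical test used as `stopping_rule`.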
In Russo \& Vannella (2024), the complexity depends on value gaps $\Delta_r^\pi(s,a)=V_r^\pi(s)-Q_r^\pi(s,a)$ since it is a Best Policy Identification problem, whereas our analysis for Policy Evaluation involves $\rho_r^\pi(s,s')=V_r^\pi(s')-\mathbb{E}_{s''\sim P(s,\pi(s))}[V_r^\pi(s'')]$. Hence, in BPI a different analysis is needed to derive the optimization problem. Conceptually, our work shows how this overall strategy is more general than just identifying the best policy, and can be applied to other problems (such as policy evaluation). We will highlight this distinction more clearly in the revised version of our manuscript. > - Could this be a potential typo?: In the line 176 second column, $\mathrm{ Alt}=\emptyset$ is empty $\to$ ${\rm Alt}$ is empty? > - Could this be a potential typo?: In the line 2 in Algorithm 1, $\inf \to \arg\inf$? Yes, both of these are indeed typos. We appreciate the reviewer for catching them and will correct them in the final version of the paper. --- Rebuttal Comment 1.1: Comment: Thanks for the reply. I maintain the original positive score. --- Reply to Comment 1.1.1: Comment: Dear reviewer, thank you again for your review and for maintaining a positive assessment. We sincerely hope our clarifications have addressed your concerns. Should you have any additional questions or feedback, please do not hesitate to let us know. With these clarifications, we hope to reinforce your overall confidence in our submission.
Summary: This paper investigates the problem of efficiently evaluating multiple policies across multiple reward functions in an online discounted setting. The authors provide an instance-specific lower bound on sample complexity and leverage it to design an efficient exploration strategy, adapting the MR-NaS exploration scheme. The paper provides theoretical guarantees on sample complexity and demonstrates the effectiveness of the approach through experiments in tabular environments. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, I checked all the theorems, and they looked good to me. Experimental Designs Or Analyses: Yes, the experiments look good to me. Supplementary Material: Yes, I checked the proofs, and they look good to me. Relation To Broader Scientific Literature: The key contribution of this paper is that it is the first work on multi-reward multi-policy evaluation. It provides a lower bound for the sample complexity and proposes a provably sample-efficient algorithm, which lays a solid theoretical foundation for this topic. However, a limitation is that the discussion is restricted to the tabular setting. With function approximation, computing $w^*_t$ can be very hard and inefficient. I would suggest the authors try finding an efficient surrogate of this step and implementing their algorithms in more complex environments, which would make a greater impact. Essential References Not Discussed: I wonder about the relationship of this work to reward-free RL. It seems that reward-free RL can potentially solve this problem, since reward-free RL can collect an informative dataset without the knowledge of reward and guarantee accurate estimation of the value functions given any reward. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their evaluation of our work and their positive remarks. > The key contribution of this paper is that it is the first work on multi-reward multi-policy evaluation. [...] However, a limitation is that the discussion is restricted to the tabular setting. With function approximation, computing $w^*_t$ can be very hard and inefficient. Indeed, we plan to extend our framework to include function approximation. However, as this is the first work specifically focused on multi-reward multi-policy evaluation, we prioritized solid theoretical results in the tabular regime (e.g., characterizing the set of confusing models and deriving a reward-free sample-complexity bound). Furthermore, many theoretical investigations do not include empirical evaluations; we believe our experiments in the tabular setting offer valuable insights into the practicality of our approach and can inspire future lines of research involving function approximation. > I wonder about the relationship of this work to reward-free RL. It seems that reward-free RL can potentially solve this problem, since reward-free RL can collect an informative dataset without the knowledge of reward and guarantee accurate estimation of the value functions given any reward. We thank the reviewer for the opportunity to clarify this point. Note that our setting encompasses reward-free RL but not vice versa, since our problem formulation can handle both finite and convex reward sets (thus covering the entire set of rewards, as in reward-free RL). Furthermore, the instance-dependent perspective directly accounts for the complexity of the reward set, whereas reward-free RL typically does not. Moreover, while reward-free RL can indeed collect an informative dataset without knowledge of the reward, for problems with a small number of rewards, using reward-free RL may unnecessarily increase the sample complexity. 
Lastly, in Corollary 4.6, we provide an instance-dependent sample-complexity result for the entire set of rewards (as in reward-free RL). This result is novel, and we emphasize that such instance-dependent analyses are not common in the reward-free setting. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response and will maintain my score. --- Reply to Comment 1.1.1: Comment: Dear reviewer, thank you again for your review and feedback. We appreciate your positive assessment and hope to have addressed your concerns. If you have any further questions or require additional details, please feel free to reach out. We hope our explanations have offered further clarity on the quality and intent of our work.
Summary: This paper addresses the problem of online multi-reward multi-policy evaluation in reinforcement learning, aiming to estimate the value functions of multiple policies across diverse reward sets with (ε, δ)-PAC guarantees. The authors derive an instance-dependent sample complexity lower bound that scales with a value deviation measure, capturing the interaction between rewards and transitions. They propose MR-NaS, an adaptive exploration algorithm that leverages this bound to guide efficient data collection. The algorithm extends prior work on Multi-Reward Best Policy Identification (BPI) and is shown to be asymptotically optimal. Experiments in tabular environments demonstrate MR-NaS's effectiveness compared to baseline methods. Claims And Evidence: See the comments below Methods And Evaluation Criteria: See the comments below Theoretical Claims: See the comments below Experimental Designs Or Analyses: See the comments below Supplementary Material: See the comments below Relation To Broader Scientific Literature: See the comments below Essential References Not Discussed: See the comments below Other Strengths And Weaknesses: Strengths: - The work is the first to provide instance-dependent sample complexity bounds for multi-reward policy evaluation, addressing a gap in PAC guarantees for this setting. - The analysis introduces a novel value deviation measure and connects it to sample complexity, offering insights into how reward structure impacts evaluation difficulty. - Applications in areas like language model fine-tuning and robotics motivate the problem, highlighting real-world significance. Weaknesses: - The communicating MDP assumption may restrict applicability to environments with transient states. - Experiments are limited to tabular domains; extensions to high-dimensional or continuous spaces are not discussed. Practical aspects (e.g., handling model estimation errors in non-tabular settings) are underdeveloped. 
Other Comments Or Suggestions: NA Questions For Authors: - How does sample complexity scale with the number of policies/rewards? A scalability analysis would help assess broader utility. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for evaluating our work and acknowledging the novelty of our paper. Below, we address your concerns in detail. > The communicating MDP assumption may restrict applicability to environments with transient states. We note that our focus on communicating MDPs is a standard assumption in Best Policy Identification (see Russo \& Vannella, 2024 and Al Marjani et al., 2021). For weakly communicating MDPs, as $\delta \to 0$, transient states have negligible impact on the sample complexity asymptotically. Our main goal here is to illuminate how MDP-dependent quantities shape sample complexity for multi-reward multi-policy evaluation. > Experiments are limited to tabular domains; extensions to high-dimensional or continuous spaces are not discussed. Practical aspects (e.g., handling model estimation errors in non-tabular settings) are underdeveloped. Extending our theoretical framework and algorithms to non-tabular settings is indeed an exciting future direction. As this is the first work on multi-reward multi-policy evaluation with instance-dependent PAC guarantees, we prioritized a rigorous treatment in the tabular setting (e.g., characterizing confusing models, establishing a reward-free sample-complexity bound). Furthermore, many theoretical investigations do not include empirical evaluations; we believe our experiments in the tabular setting offer valuable insights into the practicality of our approach and can inspire future lines of research involving function approximation. > How does sample complexity scale with the number of policies/rewards? A scalability analysis would help assess broader utility. The result in Theorem 4.5 shows that the sample complexity depends on the _worst_ reward-policy pairs in the sets of rewards and policies considered. These “worst” pairs can differ from one MDP to another—what is easy in one MDP could be difficult in another. 
This captures the inherent difficulty of the reward-policy sets and is not just a sum over individual pairs. For example, in the generative setting, choosing a uniform distribution $\omega(s,\pi(s)) = \frac{1}{|S|}$ yields a bound on the order of $ O\left(\max_{\pi,r} \frac{\gamma^2 |S| \max_s ||\rho_r^\pi(s)||^2}{4 \epsilon^2 (1-\gamma)^2}\right), $ and we have $\|\rho_r^\pi(s)\|_\infty = O\left(\frac{1}{1-\gamma}\right)$. Consequently, this yields a worst-case scaling of $ O\left(\frac{\gamma^2|S|}{4\epsilon^2 (1-\gamma)^4}\right), $ which is independent of the number of policies/rewards. We will make sure to clarify these points further in the final manuscript.
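To make the value-deviation term concrete, the following minimal pure-Python sketch computes $\rho_r^\pi(s,s')=V_r^\pi(s')-\mathbb{E}_{s''\sim P(s,\pi(s))}[V_r^\pi(s'')]$ on a toy two-state MDP by exact policy evaluation. All numbers here are hypothetical and purely illustrative; this is not code or data from the paper.

```python
gamma = 0.9
# Transitions under the evaluated policy pi: P[s][s'] = P(s' | s, pi(s)).
P = [[0.8, 0.2],
     [0.1, 0.9]]
r = [1.0, 0.0]  # reward of the evaluated policy in each state (toy values)

# Exact policy evaluation: solve (I - gamma * P) V = r (2x2 Cramer's rule).
a, b = 1 - gamma * P[0][0], -gamma * P[0][1]
c, d = -gamma * P[1][0], 1 - gamma * P[1][1]
det = a * d - b * c
V = [(d * r[0] - b * r[1]) / det, (a * r[1] - c * r[0]) / det]

# Value deviation: rho[s][s'] = V[s'] - E_{s'' ~ P(s, pi(s))}[V[s'']].
EV = [sum(P[s][j] * V[j] for j in range(2)) for s in range(2)]
rho = [[V[sp] - EV[s] for sp in range(2)] for s in range(2)]
```

By construction, $\rho_r^\pi(s,\cdot)$ averages to zero under the next-state distribution $P(s,\pi(s))$, and its magnitude grows as $\gamma \to 1$, consistent with the $1/(1-\gamma)$ scaling used in the worst-case bound above.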
Summary: This paper studies the problem of devising an optimal data-collection policy that can evaluate policies in the multi-reward and multi-policy setting. The paper adopts a PAC sample complexity perspective over finite or convex sets of rewards. The analysis revolves around the set of alternative models and the construction of lower bounds for related quantities. The proposed algorithm for the problem is adapted from (Russo & Vannella, 2024). Empirical results on 4 benchmarks show promising performance compared with some existing approaches. _______________________________________ ### Update after rebuttal: I appreciate the authors’ effort in the rebuttal. Most of my concerns have been addressed. However, considering that the manuscript would benefit from clearer exposition, in particular regarding its relation to (Russo & Vannella, 2024), I will keep my score. Claims And Evidence: Largely yes, but some details could be provided for clarity. See the comment sections for details. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. Experimental Designs Or Analyses: The reviewer didn’t implement the pseudocode on a local machine to redo the experimental studies, but went over the simulation results. Supplementary Material: The reviewer didn’t go over the supplementary material. Relation To Broader Scientific Literature: The paper studies policy evaluation in reinforcement learning in the multi-reward and multi-policy setting. It poses an interesting question of finding an efficient sampling strategy in such a setting. The problem seems to be general and contains traditional RL as a special case. Essential References Not Discussed: The reviewer believes the essential related works are covered. Other Strengths And Weaknesses: Strength: The paper poses an interesting question of finding an efficient sampling strategy in the multi-reward and multi-policy setting. The empirical results show promising performance compared with existing algorithms. Weakness: See the comment section below. 
Other Comments Or Suggestions: 1. It is quite important to distinguish the work described in this paper from that of (Russo & Vannella, 2024). This is because the algorithms are exactly the same, and in (Russo & Vannella, 2024), the problem is also a policy identification problem. With clear distinctions, the current paper will be positioned better. 2. Around line 145, the paper mentions that the considered target policies are deterministic. Will the analysis for stochastic policies be significantly different? 3. In Line 211, it says the confusing set will be close to M (in the KL sense). However, the definition of the set of alternative models only requires the models to be at least 2$\epsilon$ apart in the infinity-norm sense. Please clarify these concepts. 4. In Line 219, it’s unclear what it means that $P(s,a)$ is continuous w.r.t. $P’(s,a)$, as $P(s,a)$ is not a function of $P’(s,a)$. Please elaborate. 5. The definition of $P’_r(s,a)$ is unclear. 6. Typo: Line 22 right column, different task -> different tasks 7. Typo: Line 69, a period is left out. Questions For Authors: Please see the comment section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer’s feedback and the time they dedicated to evaluating our paper. Below, we provide detailed responses to each of their concerns. > It is quite important to distinguish the work described in this paper from that of (Russo \& Vannella, 2024). [...] With clear distinctions, the current paper will be positioned better. We thank the reviewer for the comment. Prior work has focused on the Best Policy Identification (BPI) problem, but the technique is far more general and can be applied to other problems. Our contribution is to show how the same broad approach can be applied to the multi-reward, multi-policy evaluation setting. The overall technique is inspired by approaches in Best Arm Identification (Garivier \& Kaufmann, 2016), where the problem is cast as a hypothesis-testing framework to distinguish the true model from the ``worst'' confusing model (see also Degenne \& Koolen, 2019; Al Marjani et al., 2021). This perspective makes it possible to compute an exploration strategy $w_t^*$ at each time step that maximizes the evidence gathered from the model (depending on the set of possible alternative models). While the general algorithmic structure is similar in all these works, the specific optimization problem in computing $w_t^*$ differs because of the way the lower bound is derived here (i.e., how the reward set and its geometry affect sample complexity). In BPI, the complexity depends on value gaps $\Delta_r^\pi(s,a)=V_r^\pi(s)-Q_r^\pi(s,a)$, whereas our analysis involves $\rho_r^\pi(s,s')=V_r^\pi(s')-\mathbb{E}_{s''\sim P(s,\pi(s))}[V_r^\pi(s'')]$. Hence, a different analysis than in BPI is needed to derive the optimization problem. We will clarify these distinctions more explicitly in the final version of the paper. > Around line 145, the paper mentions that the considered target policies are deterministic. Will the analysis for stochastic policies be significantly different? 
The analysis can be carried out with stochastic policies in a similar way (in the appendix we also have results for stochastic policies), and we will point this out in the paper. We chose to mainly work with deterministic policies to avoid overly cumbersome notation. > In Line 211, it says the confusing set will be close to $M$ (in the KL sense). However, the definition of the set of alternative models only requires the models to be at least $2\epsilon$ apart in the infinity-norm sense. Please clarify these concepts. We thank the reviewer for the question and the opportunity to clarify this point. The set of alternative models $Alt_{\pi,r}^\epsilon$ contains all models $M'$ satisfying $\|V_{M_r}^\pi - V_{M_r'}^\pi\|_\infty > 2\epsilon$. The sample complexity is characterized by the ``worst'' model in this set, i.e., the one that maximizes the sample complexity. This is exactly what we do in Eq. (2): we solve an optimization problem that tries to find an alternative model that minimizes the KL-divergence between the transition function $P$ of $M$ and the transition function $P_r'$ of an alternative model $M_r'$. We will make sure to clarify this point in the paper. > In Line 219, it’s unclear what it means that $P(s,a)$ is continuous w.r.t. $P'(s,a)$, as $P(s,a)$ is not a function of $P'(s,a)$. Please elaborate. What we mean is that $P(s,a)$ is an absolutely continuous measure with respect to $P'(s,a)$. In other words, for all $s'$ such that $P'(s'|s,a) = 0$, we have $P(s'|s,a)=0$. We will clarify this in the paper. > The definition of $P_r'(s,a)$ is unclear. We understand that this definition may seem odd. For an alternative model $M_r'$, we denote by $P_r'$ its transition function (the subscript $r$ is only added to highlight for which reward $r$ the transition function induces an alternative model). We will explicitly explain this notation in the paper. > Typo: Line 22 right column, different task $\to$ different tasks. Typo: Line 69, a period is left out. 
We appreciate the reviewer pointing out these typos. We will correct them in the paper. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the clarification and responses. I would like to keep my score. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer again for their feedback and hope our clarifications have addressed your concerns. If there remain any specific suggestions or additional improvements that could help strengthen the manuscript further, and potentially lead you to reconsider your score upward, we would be very grateful to know. We remain committed to further improving our manuscript.
Agent-Centric Actor-Critic for Asynchronous Multi-Agent Reinforcement Learning
Accept (poster)
Summary: This paper proposes an Agent-Centric Actor-Critic framework for asynchronous multi-agent reinforcement learning, which includes a module that addresses asynchrony without relying on padding. The proposed module incorporates agent-centric history encoders for independent trajectory processing and an attention-based centralized critic for integrating agent information. Experimental results demonstrate that ACAC outperforms existing methods in macro-action benchmarks. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental designs, particularly the ablation studies, raise several concerns. 1. The ACAC method includes additional encoders and a transformer module, which significantly increases the number of parameters. It is crucial to determine whether the observed performance improvements are attributable to the additional parameters. 2. The effectiveness of each modification in the proposed method has not been sufficiently verified through ablation studies. For instance, the paper integrates timestep information—an ablation comparing ACAC with and without timestep information, and previous methods with this timestep information, would help clarify its impact. Similarly, applying the modified GAE in a PPO version of Mac-IAICC would help demonstrate its contribution. 3. The current ablation study is somewhat unclear. For example, the authors appear to examine the negative effect of duplicated observations, but the ACAC-Duplicate condition does not accurately reflect previous methods, as they do not involve encoders or timestep updates. This might only indicate that the encoder mechanism, when poorly applied, leads to negative effects. 4. The appendix lacks implementation details for the proposed module, which would be beneficial for replicating and understanding the method. Supplementary Material: Yes. 
Relation To Broader Scientific Literature: This paper makes a contribution to improving the performance of asynchronous MARL by addressing the padding problem inherent in existing methods. The ACAC method may inspire further advancements in asynchronous MARL algorithms. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: - The paper is well-written and structured, making it easy to follow. - Illustrative figures and graphs effectively enhance understanding of the setting and methodology. - The motivation for the work is clear, and the proposed method is concisely presented. Weaknesses: - A key concern is the practical applicability of the setting in MARL. The algorithm requires individually pre-defined macro-actions, which may not be realistic in a multi-agent environment, where the execution of one agent’s macro-action can be disrupted by other agents. For example, one agent’s macro-action may be hindered by others, causing the action to be incomplete or interrupted. The current macro-action framework does not explicitly account for the interactions between agents, which is critical in highly coordinated tasks where agents’ actions are interdependent. - The padding problem in asynchronous MARL seems relatively minor, and the proposed modification to the critic input does not appear to be significantly novel. - It remains unclear why the proposed method outperforms existing ones. The performance gains could stem from the additional parameters or the incorporation of time embeddings (as discussed in the experimental part above). Without isolating these factors, it is difficult to determine the source of the improvements. Furthermore, the use of self-attention seems unnecessary in this context. Other Comments Or Suggestions: No. Questions For Authors: 1. What is the total parameters in ACAC compared to the other baselines? 2. Do the other baselines also incorporate RNNs in their value function? Code Of Conduct: Affirmed. 
Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate your insightful comments and would like to clarify our contributions and address concerns as follows: ### **[Q1] Parameter Comparison** We agree that comparing parameters is important, so we compared the total number of parameters between ACAC and Mac-IAICC. The results are as follows: - Overcooked/Rand: ACAC (320k) > Mac-IAICC (241k) - Overcooked-Large/Rand: ACAC (874k) < Mac-IAICC (963k) Crucially, ACAC significantly outperforms Mac-IAICC in Overcooked-Large despite having *fewer* parameters. This strongly indicates ACAC's performance gains stem from its architectural design (more effective processing of asynchronous information), not just model capacity. &nbsp; ### **[Q2] Use of RNN in baselines** Yes, all macro-action baselines utilize RNNs, as ACAC does. &nbsp; ### **[W1] Practicality of Pre-defined Macro-Actions** Our framework doesn't assume uninterrupted macro-actions. In the environments used (e.g., Overcooked), actions *can* be interrupted by other agents (e.g., blocking). When this occurs, the action terminates early, allowing the agent to immediately choose a new macro-action based on the situation. This provides reactive adaptation to dynamic interactions. While the environment handles these interruptions, our paper focuses on **ACAC's ability to handle the resulting asynchrony** in decision points for effective learning in the CTDE framework. &nbsp; ### **[W2] Padding Issue & Critic Novelty** We appreciate the perspective on the padding problem. While padding is simple, our paper argues, and our results show, that its impact is significant in asynchronous MARL, especially in complex tasks. 
It can cause misleading temporal information (padding obscures the actual time elapsed between an agent's valid observations, hiding crucial duration information) and inaccurate history abstraction (reusing past observations via padding makes it hard to distinguish new vs. repeated stale information, leading to inaccurate joint history representations), and it can hinder credit assignment. Our experiments (Figs 7, 8) consistently show ACAC (no padding) outperforms padding-based methods, especially in complex settings (Overcooked-Large), suggesting that addressing padding is crucial, not minor. This motivated ACAC's design to handle asynchronous inputs directly. &nbsp; ### **[W3] Component Contribution Verification** We agree ablation studies are crucial, so we have performed ablations for time embedding ("No TE") and self-attention ("No SA", using an MLP instead) across Overcooked and Overcooked-Rand environments. Since the complex Overcooked-Rand-B map showed the clearest and most significant performance differences between configurations, clearly highlighting component impacts, we focus on these results here for brevity. Full results will be included in the revised manuscript. We report the average final performance and standard errors over five random seeds. **Table 3**. Ablation study on time embedding (TE) and self-attention (SA). |ACAC | No TE | No SA | |--------|--------|--------| |212.42 ± 0.64 | 174.65 ± 18.62 | 136.69 ± 19.44 | These results show that removing either the TE or SA component severely degrades both stability and final performance, confirming that each component is vital to ACAC's effectiveness. Regarding the PPO/GAE ablation, directly applying PPO to Mac-IAICC (N critics) is non-trivial. However, our ACAC vs. ACAC-Vanilla comparison (Sec 5.2, Fig 7) isolates the PPO objective's impact. They share the same architecture; only the optimization differs (PPO vs. simpler policy gradient). ACAC's superior performance demonstrates the benefit of our PPO-based objective. 
&nbsp; ### **[Experimental Designs 3] Validity of Ablation Comparison** This experiment aimed to isolate the negative impact of processing padded/duplicated information, which is difficult within standard padding methods. ACAC-Duplicate uses the ACAC architecture, including time embedding and per-agent history encoders, but adopts the padding method's history update rule (update on any agent's new observation). This creates a valid intermediate comparison, effectively showing the performance degradation caused by processing duplicated information within our agent-centric framework. &nbsp; ### **[Experimental Designs 4] Lack of Implementation Details** Key architectural details are in Sec 3/Fig 4, with hyperparameters in Appendix F/Table 1. Please feel free to ask if further hyperparameter details are needed. We will release the full source code publicly for reproducibility. --- Rebuttal Comment 1.1: Comment: Some follow-up questions: - How does the framework decide when a macro-action is interrupted? For example, if two agents block each other—say, by attempting to occupy the same space in a narrow passage—will both of their macro-actions be interrupted, or does the system prioritize one agent over the other? - Another concern is whether this framework supports coordination between agents, which is essential in cooperative MARL. For instance, in a scenario where a path is blocked, an effective system might allow one agent to wait while the other passes, then proceed in turn. However, if macro-actions are predefined without considering inter-agent interactions—as they often are in single-agent settings—such coordination may not emerge naturally. This is my primary worry about the practical applicability of the approach. Directly extending single-agent macro-actions to cooperative MARL could undermine cooperation, as agents might pursue individual goals without adapting to each other's actions. 
- How are value and policy updates handled when macro-actions are interrupted? --- Reply to Comment 1.1.1: Comment: We thank Reviewer SHS8 for the thoughtful and detailed comments. We understand the concerns regarding the applicability of our MacDec-POMDP framework, particularly in scenarios requiring agent coordination and potential interruptions of macro-actions. It seems that the last paragraph of Section 2.2 may have caused some confusion. To clarify up front, macro-actions in our framework are not simple fixed sequences of micro-actions, but are instead defined using the **Options framework** within the Semi-Markov Decision Process (SMDP) setting, **with agent interactions explicitly considered**. The explanation below is intended to clarify this distinction and address the reviewer’s concerns. We will also revise the relevant part in the final version of the paper to avoid such confusion. We hope this response helps resolve any misunderstandings and provides a clearer view of how our method works in practice. &nbsp; ### **1. Recap: Definition of Macro-Action** In our framework, macro-actions are implemented as options in the Semi-Markov Decision Process (SMDP) framework [1]. Each macro-action is defined as $m = \langle \pi_m, I_m, \beta_m \rangle$, where: (i) $\pi_m (a|h_{\text{mic}})$ is an intra-option policy over micro-actions given a history of micro-observations $h_\text{mic}$; (ii) $I_m$ is the initiation set from which the macro-action can start (typically assumed to be all states, as in our work); (iii) $\beta_m(h_{\text{mic}})$ is the termination condition, giving the probability that the macro-action ends given the current micro-observation history $h_\text{mic}$. See Appendix D for more detail. Since $I_m$ is typically unrestrictive, the key behaviors are encoded in (i) $\pi_m$ and (iii) $\beta_m$. 
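As a minimal illustrative sketch of this tuple (all names and the toy environment below are hypothetical illustrations, not the actual implementation used in the paper), a macro-action $m = \langle \pi_m, I_m, \beta_m \rangle$ can be represented and executed as follows:

```python
import random
from dataclasses import dataclass
from typing import Callable, List

# Minimal sketch of a macro-action as an option <pi_m, I_m, beta_m>.
# Hypothetical illustration only; not the paper's actual code.
@dataclass
class Option:
    intra_policy: Callable[[List[int]], int]    # pi_m: micro-history -> micro-action
    can_init: Callable[[int], bool]             # I_m: may the option start in this state?
    termination: Callable[[List[int]], float]   # beta_m: micro-history -> P(terminate)

def run_option(option: Option, state: int, step: Callable[[int, int], int],
               rng: random.Random, max_steps: int = 50):
    """Execute one option until beta_m fires; return the final state and duration."""
    assert option.can_init(state)
    history = [state]
    for t in range(1, max_steps + 1):
        action = option.intra_policy(history)
        state = step(state, action)   # one environment micro-transition
        history.append(state)
        if rng.random() < option.termination(history):
            return state, t
    return state, max_steps

# Toy line-world option: "move right until reaching cell 5".
go_right = Option(
    intra_policy=lambda h: +1,
    can_init=lambda s: True,
    termination=lambda h: 1.0 if h[-1] >= 5 else 0.0,
)
final_state, duration = run_option(go_right, 0, lambda s, a: s + a, random.Random(0))
# final_state == 5, duration == 5
```

Note that the termination condition here depends only on the micro-observation history, so goal invalidation (e.g., another agent taking the target object) can be expressed directly in $\beta_m$, which is the mechanism discussed below.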
We explain how these components are used to support inter-agent interaction and adaptive behavior, using concrete examples from the Overcooked environment. &nbsp; ### **2. Example of Macro-Actions in the Overcooked Environment** To illustrate how inter-agent interaction is handled in practice, consider the Overcooked environment: - **Intra-option policy**: A macro-action like “go to tomato” may involve path planning. If another agent is blocking the path, the intra-option policy handles this by having the agent wait until the path is clear, then continue. This behavior is naturally encoded into the policy without requiring the macro-action to terminate. - **Termination condition**: If the task goal becomes invalid—for instance, another agent picks up the tomato—the macro-action terminates automatically according to its predefined termination condition. These behaviors are not considered "interruptions" in the conventional sense. Rather, they are expected outcomes under the macro-action’s design. The inter-agent interaction and adaptation are achieved through well-defined intra-option policies and termination conditions, enabling cooperative behaviors to naturally emerge within the framework. &nbsp; ### **3. On Macro-Action “Interruptions”** As all macro-actions are designed with agent interaction in mind, including blocking or dynamic invalidation of goals, what might appear as “interruptions” are in fact **normal terminations** governed by the macro-action’s own $\beta_m$. Consequently, and in direct response to Reviewer SHS8’s third question, the learning of value functions and policies requires no special handling of such cases. There is no distinction made between “complete” and “incomplete” executions, as all terminations are intentional under the defined option. &nbsp; ### **4. Practical Applicability of MacDec-POMDP** We acknowledge the reviewer’s concern regarding the emergence of coordination in cooperative settings. 
Although our macro-actions may resemble single-agent options, they are learned and executed entirely within a multi-agent context. Specifically, we integrate intra-option policies and termination conditions to handle inter-agent interactions, enabling agents to adapt to one another’s behaviors during execution. In doing so, our approach demonstrates that macro-actions can effectively address inter-agent dynamics, allowing coordination behaviors—such as yielding in narrow spaces—to emerge naturally through the learning process. &nbsp; We hope this final response clarifies how macro-actions are structured in our environments, how they enable coordination among agents, and how termination is handled during learning. Together with our earlier responses, we believe these address your concerns and offer a clearer understanding of our proposed approach. &nbsp; **References** [1] Sutton et al., “Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning,” *Artificial Intelligence*, 1999.
Summary: This paper introduces Agent-Centric Actor-Critic (ACAC), a novel algorithm designed for asynchronous multi-agent reinforcement learning (MARL) in environments with sparse rewards and varying macro-action durations. Each agent's trajectory is processed independently, capturing the history of macro-observations along with their timesteps, enabling accurate temporal abstraction without padding. Meanwhile, an attention mechanism integrates the individual agent histories into a centralized critic, allowing for more accurate value estimation in asynchronous settings. This paper adapts a modified GAE to handle irregular intervals between macro-observations, ensuring effective policy optimization in asynchronous environments. This paper evaluates ACAC on several macro-action-based benchmarks, such as BoxPushing and Overcooked, demonstrating that ACAC achieves faster convergence and higher final returns compared to baseline methods. Claims And Evidence: Yes. The paper provides theoretical and empirical evidence for its main claims. This paper claims that ACAC generalizes well to more complex and randomized environments. This evidence is strong, but additional experiments in more complex environments, such as SMAC, could further validate the generalization capabilities. Methods And Evaluation Criteria: The methods and evaluation criteria are well-designed and appropriate for the problem. The choice of benchmarks, baseline comparisons, and ablation studies supports the validity of the approach. Theoretical Claims: The paper does not present formal theoretical proofs in the traditional sense, but it does provide detailed derivations and justifications for key algorithmic components. In the modified GAE section and the agent-centric history encoder and centralized critic section, it would be beneficial to show that the modified GAE preserves the convergence guarantees of the original GAE or to discuss any potential limitations introduced by the asynchronous adaptation. 
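To make concrete the kind of asynchronous adaptation in question, here is one plausible sketch of GAE where each macro-transition spans a variable number of environment steps and the discount factors are raised to the elapsed duration. This is a hypothetical illustration of the general idea (function name and duration-exponent rule are my own), not necessarily the paper's exact formulation:

```python
from typing import List

def async_gae(rewards: List[float], values: List[float],
              durations: List[int], gamma: float = 0.99,
              lam: float = 0.95) -> List[float]:
    """GAE over macro-transitions of irregular length.

    rewards[t]: reward accumulated over macro-transition t,
    values: critic estimates at the T + 1 decision points,
    durations[t]: number of environment steps transition t spanned.
    Discounts are raised to the elapsed duration -- one plausible way to
    handle irregular intervals (hypothetical, not the paper's exact rule).
    """
    T = len(rewards)
    advantages = [0.0] * T
    gae = 0.0
    for t in reversed(range(T)):
        g = gamma ** durations[t]
        delta = rewards[t] + g * values[t + 1] - values[t]
        gae = delta + g * (lam ** durations[t]) * gae
        advantages[t] = gae
    return advantages
```

With all durations equal to 1 this reduces to standard GAE, which would be a natural sanity check for any discussion of whether the original convergence guarantees carry over.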
Experimental Designs Or Analyses: The paper presents a well-designed set of experiments, Overcooked and BoxPushing, to evaluate the proposed Agent-Centric Actor-Critic (ACAC) algorithm. The baselines, such as Mac-NIACC, Mac-IAICC, and MAPPO, are appropriate. However, I believe a much harder environment such as SMAC would be more beneficial. Supplementary Material: Yes, I reviewed all the supplementary materials. Relation To Broader Scientific Literature: The paper's contributions are related to MARL and HRL. By addressing the challenges of asynchronous MARL with macro-actions, avoiding the pitfalls of padding-based methods, and adapting GAE for asynchronous settings, the paper extends the state-of-the-art in the field. The use of attention mechanisms and rigorous empirical validation further strengthens the paper's contributions. Essential References Not Discussed: I believe the references are sufficient. Other Strengths And Weaknesses: * Strengths: 1. The paper introduces a novel approach to handling asynchronous multi-agent reinforcement learning (MARL) by avoiding the common padding technique, which is a significant departure from existing methods. 2. The proposed Agent-Centric Actor-Critic (ACAC) algorithm has the potential to significantly impact real-world applications where agents operate asynchronously. 3. The adaptation of Generalized Advantage Estimation (GAE) to asynchronous settings is an original contribution that addresses a gap in the literature. * Weaknesses: 1. While the paper provides detailed derivations for the modified GAE, it lacks formal theoretical guarantees or convergence proofs for the proposed methods. 2. The paper could benefit from a sensitivity analysis to show how the performance of ACAC varies with different hyperparameter settings. 3. While the chosen benchmarks are appropriate, including additional environments, such as SMAC and its variants, could further validate the generalizability of ACAC. 
Other Comments Or Suggestions: NA Questions For Authors: 1. Can the authors provide formal theoretical guarantees or convergence proofs for the proposed Agent-Centric Actor-Critic (ACAC) algorithm, particularly for the modified GAE in asynchronous settings? 2. How does the agent-centric approach compare to padding-based methods in terms of information loss or distortion? 3. Can the authors provide evaluations on the SMAC environment? Choose some representative tasks if possible. 4. Will ACAC be influenced by mixed opponent policies? I suggest conducting an experiment on SMAC-HARD to test whether ACAC is robust to different opponent strategies. Choose some representative tasks if possible. If you respond to these questions and address these concerns, I'll be willing to raise the score. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for your thoughtful remarks and would like to provide clarification on our contributions and address the raised concerns. &nbsp; ### **[Q1,W1] Analysis of Modified GAE** Similar to the original GAE, our proposed method does not guarantee convergence in general. However, just as the original GAE converges when $\lambda$ equals 0 or 1, our proposed GAE also inherently converges in these specific cases. The rationale behind using macro-action-based $\lambda$-discounting in GAE within ACAC is detailed in our response to reviewer Cf7Q (Analysis of Modified GAE); please refer to our comments there for further clarification. &nbsp; ### **[Q2] Information Loss Comparison** Given an episode obtained using macro-actions, both ACAC and padding-based methods theoretically acquire the same amount of information. However, the padding-based approach continuously uses information from observations that are not actually collected, making it difficult to distinguish between cases where no new information is obtained and those where identical information from a previous observation is repeated. This can result in information distortion. To eliminate such distortion without losing information, ACAC employs an agent-centric encoder that utilizes only each agent's available information for history encoding. This structure allows ACAC to more accurately estimate joint histories compared to padding-based methods, thereby achieving superior performance. &nbsp; ### **[Q3,W3] Evaluations on the SMAC** We agree with your point that evaluating our approach in environments like SMAC could enhance the persuasiveness of our results. Indeed, several studies have explored hierarchical approaches in SMAC; however, most of them have focused on synchronous scenarios where macro-actions have a fixed, identical duration for all agents. 
Our work, in contrast, specifically addresses scenarios involving asynchronous macro-actions, where each macro-action has varying durations. Consequently, conducting performance comparisons in synchronous SMAC environments would not meaningfully reflect the contributions of our method. To compare ACAC and the baseline methods effectively in SMAC under asynchronous settings, we would need to explicitly define macro-actions with varying durations. Unfortunately, it is not feasible to develop and test such an asynchronous version of SMAC within the limited time frame available for this rebuttal. Nevertheless, we are actively working on developing an asynchronous variant of SMAC. We strongly anticipate that ACAC will exhibit superior performance over existing baselines, such as Mac-IAICC and Mac-NIACC, once evaluated in this asynchronous SMAC environment. &nbsp; ### **[Q4] Influence by Opponent Policies** Thank you very much for suggesting an evaluation in environments like SMAC-Hard, where multiple opponent policies are mixed. Similar to our earlier explanation regarding SMAC, we believe conducting evaluations in such environments would be meaningful only within an asynchronous version of SMAC. This, however, requires developing a dedicated asynchronous SMAC environment. Additionally, effectively responding to various opponent strategies, as seen in SMAC-Hard, would necessitate developing a module explicitly designed for opponent strategy inference. While this is beyond the current scope of our research, which primarily targets asynchronous MARL scenarios, we anticipate that ACAC's per-agent history encoder could be extended or enhanced to perform opponent strategy inference. This capability would potentially enable the model to adapt effectively to different opponent strategies, representing an exciting and promising direction for future research. 
&nbsp; ### **[W2] Sensitivity Analysis on Hyperparameters**
We agree with your suggestion and have performed a sensitivity analysis on two hyperparameters, the clipping ratio and the GAE $\lambda$ (micro-level vs. macro-level $\lambda$-discounting), across the Overcooked and Overcooked-Rand environments. Since the complex Overcooked-Rand-B map showed the clearest and most significant performance differences between configurations, clearly highlighting component impacts, we focus on these results here for brevity. Full results will be included in the revised manuscript. We report the average final performances and standard errors over five random seeds.

**Table 1**. Sensitivity analysis on the clipping ratio (ε).

| ACAC (ε=0.01) | ε=0.005 | ε=0.015 |
|---------------|---------|---------|
| 212.42 ± 0.64 | 188.73 ± 14.99 | 209.62 ± 3.33 |

**Table 2**. GAE $\lambda$ comparison.

| ACAC | GAE with Micro-level $\lambda$-discounting |
|------|--------------------------------------------|
| 212.42 ± 0.64 | 132.98 ± 19.02 |

The results show that performance is robust to variations in the clipping ratio, and that ACAC's macro-action-based GAE discounting outperforms the original timestep-based discounting.
Summary: This paper proposes the Agent-Centric Actor-Critic (ACAC) algorithm to address asynchronous multi-agent reinforcement learning (MARL) in sparse-reward environments with macro-actions. The key innovation lies in replacing padding-based centralized critics with agent-centric history encoders and attention-based aggregation, which avoids spurious correlations from padded observations. ACAC introduces a modified GAE for asynchronous settings and demonstrates superior performance on macro-action benchmarks like BoxPushing and Overcooked, showing faster convergence and higher returns compared to baselines. Claims And Evidence: The superiority of ACAC over padding-based methods is supported by experiments. The ablation study (ACAC-Duplicate) convincingly demonstrates padding’s negative impact. Methods And Evaluation Criteria: The agent-centric encoder and attention-based critic are well-justified for asynchronous settings. The modified GAE adapts standard GAE to variable intervals logically. Benchmarks selected are appropriate for demonstrating algorithm efficacy. Theoretical Claims: The theoretical foundations are basically sound. Experimental Designs Or Analyses: The experiments are thoughtfully designed and robust in demonstrating the method's strengths. Broader scenario testing would better illustrate the adaptability of the techniques. Supplementary Material: I have read the Supplementary Materials, including MacDec-POMDP definitions, GAE derivations, hyperparameters, etc. Relation To Broader Scientific Literature: The paper contextualizes its work within macro-action MARL and CTDE frameworks, citing key works (Xiao et al., 2020a; Amato et al., 2019). Essential References Not Discussed: N/A Other Strengths And Weaknesses: Paper Strength ACAC introduces agent-centric history encoders with timestep embeddings and an attention-based critic, effectively addressing the misalignment caused by padding in asynchronous MARL. 
This architecture is novel and well-motivated. Extensive experiments on diverse benchmarks (including randomized and large-scale variants) demonstrate ACAC’s advantages in convergence speed, final performance, and generalization. Ablation studies (e.g., ACAC-Duplicate) confirm the necessity of avoiding padding. The modified GAE formulation for asynchronous MARL is theoretically justified and empirically validated, addressing irregular macro-observation intervals. The paper clearly identifies limitations of padding-based methods (e.g., spurious correlations) and provides a structured comparison of ACAC’s workflow versus baselines. Paper Weakness: While the modified GAE is motivated empirically, its convergence properties or bias-variance trade-off in asynchronous settings lack formal analysis. All experiments focus on grid-world tasks (BoxPushing, Overcooked). Real-world applicability (e.g., continuous control) remains unverified. Other Comments Or Suggestions: N/A Questions For Authors: Please see the weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful for your feedback and would like to offer a detailed explanation of our contributions while addressing the concerns raised. ### **[W1] Analysis of Modified GAE** It is known that the original GAE does not guarantee convergence in general. However, specific boundary cases are clearly defined: when $\lambda$=0, GAE reduces to the TD-error; when $\lambda$=1, it simplifies precisely to the empirical return. Similarly, the proposed GAE in our paper maintains this important property—when $\lambda$=1, it equals the empirical return, and when $\lambda$=0, it becomes the multi-step TD-error between consecutive decision points ($\delta_{l(k)}$ in our formulation, where $l(k)$ is the $k$-th timestep a new observation becomes available for any agent). In MacDec-POMDP settings featuring temporal abstraction via macro-actions, there can be two choices for designing the advantage function based on how $\lambda$-discounting is applied: - 1. Applying $\lambda$-discounting at the **micro-timestep level**: This approach discounts future TD errors based on the number of primitive timesteps elapsed. - 2. Applying $\lambda$-discounting at the **macro-timestep level**: This approach discounts future TD errors based on the number of macro-action decision steps taken. 
- Let's illustrate using an example where macro-observations (and value estimates $V_t$) are obtained at timesteps 0, 2, 5, and 6:
  - Rewards: $r_0, r_1, r_2, r_3, r_4, r_5, r_6, \dots$
  - Values: $V_0, -, V_2, -, -, V_5, V_6, \dots$
  - Multi-step TD errors between decision points:
    - $\delta_0 = r_0 + \gamma r_1 + \gamma^2 V_2 - V_0$
    - $\delta_2 = r_2 + \gamma r_3 + \gamma^2 r_4 + \gamma^3 V_5 - V_2$
    - $\delta_5 = r_5 + \gamma V_6 - V_5$
  - Below are the resulting advantage calculations at t=0 for each approach:
    - (1) Micro-level $\lambda$-discounting: $A^{\lambda, \text{micro}}_0 := \delta_0 + (\lambda^{2} \gamma^{2}) \delta_2 + (\lambda^{5} \gamma^{5}) \delta_5 + \dots$
    - (2) Macro-level $\lambda$-discounting: $A^{\lambda, \text{macro}}_0 := \delta_0 + (\lambda \gamma^{2}) \delta_2 + (\lambda^{2} \gamma^{5}) \delta_5 + \dots$

We adopted the second approach (macro-level $\lambda$-discounting) for ACAC, as it emphasizes both the significance of future rewards and the critical decision points associated with macro-actions. As shown in Table 2 of our response to reviewer RGMo (Sensitivity Analysis on Hyperparameter), this approach effectively handles the variable intervals in our asynchronous setting and contributes to the strong performance observed. &nbsp; ### **[W2] Real-world Applicability** Thank you for raising this important point regarding the evaluation environments. We agree that verifying applicability beyond grid-world tasks is crucial. While settings requiring asynchronous coordination are common in the real world, the field of Asynchronous MARL, especially involving macro-actions, is still emerging. Consequently, there is currently a limited set of established benchmarks available, most of which are grid-based environments like BoxPushing and Overcooked. We aimed for a comprehensive evaluation within the current scope of the field and thus tested our ACAC method on these known and publicly available benchmarks specifically designed for macro-action-based asynchronous MARL. 
We recognize the importance of demonstrating applicability in more realistic scenarios. Extending our work to real-world problems is a key direction for our future research.
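The micro- vs. macro-level $\lambda$-discounting comparison above can be made concrete with a small numerical sketch. The rewards and value estimates below are hypothetical placeholders (not numbers from the paper); only the observation times 0, 2, 5, 6 and the two discounting rules follow the example in the response above.

```python
gamma, lam = 0.99, 0.95
r = [1.0, 0.5, 0.2, 0.0, 0.3, 0.1]        # hypothetical rewards r_0 .. r_5
V = {0: 2.0, 2: 1.8, 5: 1.5, 6: 1.4}      # hypothetical value estimates
obs_t = [0, 2, 5, 6]                      # timesteps where a new macro-observation arrives

def td_error(t0, t1):
    """Multi-step TD error between consecutive decision points t0 and t1."""
    ret = sum(gamma**i * r[t0 + i] for i in range(t1 - t0))
    return ret + gamma**(t1 - t0) * V[t1] - V[t0]

deltas = [td_error(obs_t[k], obs_t[k + 1]) for k in range(len(obs_t) - 1)]

# (1) Micro-level discounting: a TD error starting at timestep t is weighted
#     by (lam * gamma)**t, i.e. lambda decays per primitive timestep.
A_micro = sum((lam * gamma)**obs_t[k] * d for k, d in enumerate(deltas))
# (2) Macro-level discounting (adopted by ACAC): weight lam**k * gamma**t,
#     i.e. lambda counts decision steps while gamma still counts timesteps.
A_macro = sum(lam**k * gamma**obs_t[k] * d for k, d in enumerate(deltas))
```

Because $\lambda$ decays per decision step rather than per timestep, the macro-level estimate weights distant decision points more heavily, matching the $A^{\lambda, \text{micro}}_0$ and $A^{\lambda, \text{macro}}_0$ formulas above.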
Summary: This paper tackles the challenges encountered in asynchronous multi-agent reinforcement learning (MARL) arising from the use of macro-actions with varying durations. In traditional Centralized Training with Decentralized Execution (CTDE) frameworks, a padding technique is often used to fill in missing macro-observations. However, such padding can introduce redundancy and misleading correlations in the history representation. To address this, the authors propose the Agent-Centric Actor-Critic (ACAC) algorithm. ACAC utilizes individual history encoders for each agent to process its own macro-observation trajectory, and an attention-based aggregation module to integrate these histories into a centralized critic. Additionally, the algorithm incorporates a modified Proximal Policy Optimization (PPO) objective and an adapted version of Generalized Advantage Estimation (GAE) suitable for asynchronous settings. The experimental results, conducted on several benchmark tasks including BoxPushing, Overcooked, Overcooked-Rand, and the more challenging Overcooked-Large, demonstrate that ACAC converges faster and achieves higher returns compared to padding-based baselines. Claims And Evidence: The paper makes several key claims: 1. Problem Identification: The conventional padding method in asynchronous MARL introduces redundant and inaccurate information into the joint history representation, thus impairing the effectiveness of the centralized critic. 2. Methodological Contribution: By employing agent-centric history encoders and an attention-based aggregation mechanism, ACAC can directly utilize the latest available macro-observations without resorting to padding. 3. Performance Improvement: The proposed ACAC, when combined with a modified PPO objective and an adapted asynchronous GAE, outperforms existing padding-based approaches in terms of convergence speed, stability, and final performance. 
The experimental results, including learning curves, ablation studies (e.g., comparison with ACAC-Duplicate), and evaluations across various environments, provide clear and convincing evidence supporting these claims. In particular, the superior performance in complex scenarios like Overcooked-Large reinforces the paper’s assertions about the benefits of the proposed method. Methods And Evaluation Criteria: The methodological design and evaluation criteria in the paper are well-motivated and appropriate: 1. Method Design: The authors identify the core issue in asynchronous settings—the irregularity in receiving new macro-observations—and address it by designing an agent-centric encoder that processes each agent’s macro-observation along with its associated timestamp. This enriched representation allows for a more accurate reconstruction of the agents’ histories. The subsequent attention-based aggregation module enables the centralized critic to combine these individual histories without the need for padding. 2. Evaluation Criteria: The paper employs a diverse set of benchmark environments that vary in complexity (from BoxPushing to Overcooked variants). The experiments are conducted using multiple random seeds and include metrics such as convergence speed and final returns. Additionally, ablation studies are performed to isolate the effects of padding versus non-padding, providing further empirical support for the proposed approach. Theoretical Claims: The theoretical contributions primarily focus on the derivation of an adapted GAE for asynchronous MARL: 1. Theoretical Derivation: The authors provide a detailed derivation of a modified GAE formulation that accounts for irregular intervals between macro-observations. By introducing a variable that captures these intervals, the new formulation properly adjusts the temporal difference errors and advantage estimates. 2. 
Assessment: The derivation is logically sound and recovers the standard GAE formulation as a special case when the intervals are uniform. While the derivation is complex, the underlying assumptions and steps are clearly articulated. It would be beneficial for the final version to further clarify any underlying assumptions about the distribution of the time intervals and boundary conditions to ensure readers fully grasp the scope of the theoretical results. Experimental Designs Or Analyses: The experimental design and analysis are robust and comprehensive: 1. Experimental Setup: The paper evaluates ACAC across multiple environments (BoxPushing, Overcooked, Overcooked-Rand, and Overcooked-Large) that effectively capture the challenges of asynchronous MARL. The inclusion of both standard and randomized scenarios demonstrates the method’s generalization capabilities. 2. Data Analysis: Learning curves, accompanied by mean and standard error statistics, clearly illustrate the performance improvements of ACAC over baseline methods. The ablation studies, particularly the comparison between ACAC and a variant that duplicates macro-observations (ACAC-Duplicate), effectively show the detrimental effects of padding in the learning process. 3. Recommendations for Improvement: Future work might explore the extension of ACAC to continuous action spaces and larger-scale multi-agent systems. Additionally, a discussion on hyperparameter sensitivity and computational efficiency would further enhance the experimental analysis. Supplementary Material: I reviewed the supplementary material thoroughly. In particular, I focused on: Appendix B: Detailed derivation of the asynchronous Generalized Advantage Estimation (GAE), which clarifies how the modified GAE adapts to irregular macro-observation intervals. Appendix C: A comparison between padding-based methods and the proposed ACAC, providing a deeper understanding of the advantages of the agent-centric approach. 
Appendix D: The formal definition of the MacDec-POMDP framework, which helps ground the theoretical discussion in a precise problem formulation. Appendix E: Additional experimental results (ablation studies). Relation To Broader Scientific Literature: The key contributions of the paper are well-situated within the broader literature: 1. Macro-actions and Hierarchical RL: The paper builds on the established ideas of macro-actions and the options framework (e.g., Sutton et al., 1999) as well as hierarchical reinforcement learning methods. It extends these ideas by addressing the challenges of asynchrony, a topic that has been explored in works such as Amato et al. (2014, 2019) and Xiao et al. (2020, 2022). 2. CTDE and Multi-Agent Coordination: The work leverages the Centralized Training with Decentralized Execution (CTDE) framework—common in multi-agent RL—to propose an innovative agent-centric design. This is related to recent advances in methods like MAPPO (Yu et al., 2022) and other actor-critic variants that tackle coordination issues in multi-agent settings. 3. Modified GAE: The adaptation of Generalized Advantage Estimation for asynchronous settings ties back to foundational work by Schulman et al. (2016), but the modification to account for variable macro-observation intervals is novel and addresses a gap in existing methods. 4. Attention Mechanisms: The use of attention-based aggregation to combine agent histories connects with broader trends in using attention (e.g., Vaswani et al., 2017) for handling sequence data, which is increasingly common in multi-agent communication and coordination research. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. Originality: The agent-centric approach to avoid padding issues in asynchronous MARL is innovative and addresses a well-known challenge in the field. 2. 
Theoretical Rigor: The derivation of the modified GAE for asynchronous settings is detailed and, overall, mathematically sound. 3. Empirical Validation: The experiments are comprehensive, spanning several benchmark environments and including ablation studies that clearly demonstrate the benefits of the proposed method. Weaknesses: 1. Complexity of Theoretical Derivations: Some parts of the derivation (especially the asynchronous GAE) are complex and may benefit from additional intuitive explanations or visual aids. 2. Scalability and Computational Overhead: While the experiments are convincing, more discussion on the computational cost and scalability of the attention-based aggregation, especially in larger agent populations, would be helpful. 3. Extension to Continuous Actions: The paper acknowledges that extending ACAC to continuous action spaces is an open challenge. More discussion on potential approaches or anticipated difficulties could strengthen the contribution. Other Comments Or Suggestions: 1. Supplementary Material Organization: Consider reorganizing the supplementary material for clearer navigation. For instance, clearly labeling each appendix section with a brief overview of its contents could help readers. 2. Minor Typos and Clarity: I noticed a few minor typos and formatting issues in the text; a careful proofreading would enhance readability. 3. Visualization of Encoder Dynamics: Including visualizations of the agent-centric encoder’s hidden state evolution or attention weight distributions could provide deeper insights into how the aggregation module improves history representation. Questions For Authors: 1. Can the authors provide qualitative analyses (e.g., visualizations of attention weights or hidden state dynamics) that illustrate how the agent-centric encoder and aggregation module enhance the representation of asynchronous histories compared to the padding-based approach? 
A detailed response and accompanying visualizations would help clarify the internal workings of the proposed model and further justify the design choices, potentially increasing the paper's impact. 2. Have the authors conducted sensitivity analyses on key hyperparameters such as the clipping ratio in PPO and the λ parameter in the modified GAE? How robust is the performance of ACAC across different settings? Insights into hyperparameter robustness would strengthen the empirical evidence and provide practical guidance for applying the method in various settings. 3. Could the authors elaborate on the potential challenges and necessary modifications for extending ACAC to continuous action spaces? Are there any preliminary experiments or theoretical insights in this direction? A clearer discussion on this topic would help situate the current work within broader applications and guide future research efforts. 4. What assumptions, if any, are made regarding the distribution or variability of the time intervals between macro-observations? Could extreme variability in these intervals affect the validity of the modified GAE formulation? Clarifying these assumptions would help determine the generality of the method and its applicability to diverse asynchronous environments. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Your valuable comments are much appreciated. In response, we aim to clarify our contributions and address the points of concern. ### **[Q1] Qualitative Analysis Request** Thank you for the insightful suggestion to correlate attention scores with behavior. This is an excellent direction for future work to gain deeper understanding. &nbsp; ### **[Q2] Hyperparameter Sensitivity** We conducted experiments to analyze hyperparameter sensitivity. Due to space limitations in this response, we have detailed these results in our separate response provided to reviewer RGMo (Sensitivity Analysis on Hyperparameter). We kindly request your understanding and refer you to that specific comment for the detailed results. &nbsp; ### **[Q3,W3] Continuous Action Space** We believe ACAC is structurally suitable for continuous actions with standard actor-critic modifications. The main challenge is the current lack of asynchronous MARL benchmarks with continuous action spaces and defined macro-actions. Defining these requires careful effort, preventing results within the rebuttal period, but it remains important future work. &nbsp; ### **[Q4,W1] Analysis of Modified GAE** Similar to the original GAE, our proposed method does not guarantee convergence in general. However, just as the original GAE converges when $\lambda$ equals 0 or 1, our proposed GAE also inherently converges in these specific cases. The rationale behind using macro-action-based $\lambda$ discounting in GAE within ACAC is detailed in our response to reviewer Cf7Q (Analysis of Modified GAE); please refer to our comments there for further clarification. &nbsp; ### **[W2] Computational Complexity of ACAC Structure** Defining N=#agents, K=#observations per agent, Z=obs dim, H=hidden dim: - Actor: Complexity is nearly identical, aside from ACAC's minor time embedding overhead. 
- Critic: - History Encoding - Mac-IAICC: Encodes the concatenated joint history (size NZ) whenever any agent gets a new observation (up to NK times total). This leads to a worst-case complexity of O(N²KZ). - ACAC: Processes each agent's observation (size Z) individually through its dedicated encoder. Across all agents and observations (NK total), the complexity is O(NKZ). This scales better than Mac-IAICC's encoding by a factor of N. - Thus, ACAC's history encoding scales linearly with N (O(NKZ)), offering better scalability than the quadratic O(N²KZ) of joint encoding, especially for larger N. - Value Estimation - Mac-IAICC: Typically uses an MLP processing aggregated hidden features (size NH). Computed up to NK times, giving O(N²KH) total complexity. - ACAC: Employs an attention mechanism over N agent representations (size H), requiring O(N²H) computation per step where an observation arrives. In the worst case (NK steps), this totals O(N³KH) complexity. - While a simpler MLP aggregation (as in Mac-IAICC) offers one potential alternative for scalability, potentially trading off some performance, another promising direction for future work is to explore the integration of efficient attention mechanisms [1] that reduce complexity (e.g., to linear O(NH) per step) while aiming to retain the expressive capacity for modeling inter-agent dependencies. - Practical Runtime: Despite theoretical differences, the observed runtimes for ACAC and Mac-IAICC were similar in our experiments (N=3, 6). This suggests ACAC's more efficient history encoding helps offset the computational cost of the attention module in practice for these agent populations. [1] Wang, S., Li, B. Z., Khabsa, M., Fang, H., & Ma, H. (2020). Linformer: Self-Attention with Linear Complexity. arXiv preprint arXiv:2006.04768.
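As a rough illustration of where the O(N²H) term in the value-estimation step comes from, below is a minimal single-head attention aggregation over N per-agent hidden states. This is a generic sketch of the shape of such a module, not the authors' exact architecture; the weight matrices and function name are our own illustrative assumptions.

```python
import numpy as np

def attention_aggregate(h, Wq, Wk, Wv):
    """h: (N, H) per-agent representations -> (N, H) aggregated features."""
    Q, K, V = h @ Wq, h @ Wk, h @ Wv                 # projections: O(N H^2)
    scores = Q @ K.T / np.sqrt(K.shape[1])           # (N, N) score matrix: the O(N^2 H) term
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    return weights @ V                               # each agent attends to all N agents
```

Every agent attends to every other agent, so forming the score matrix alone costs O(N²H) per step; the linear-attention variants cited in the rebuttal (e.g., Linformer) reduce exactly this term.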
Geometric Generative Modeling with Noise-Conditioned Graph Networks
Accept (poster)
Summary: The paper introduces Noise-Conditioned Graph Networks (NCGNs), a class of graph neural networks that dynamically adapt their architecture based on the noise level during flow-based generative modeling of geometric graphs. The key innovation is Dynamic Message Passing (DMP), which adjusts both the connectivity range (e.g., k-nearest neighbors) and graph resolution (via coarse-graining) as a function of noise. Claims And Evidence:
**Strengths:**
- It is the first work to formalize noise-dependent graph architecture adaptation for geometric generative modeling, integrating theoretical insights (mutual information analysis) with practical algorithmic design (DMP).
- Comprehensive experiments: validated on 3D shapes, biological data, and images, demonstrating broad applicability. Theoretical-experimental alignment: attention weight analysis (Fig. 3) and Gromov-Wasserstein distance measurements (Fig. 4) empirically confirm theoretical predictions.
**Weaknesses:**
- The scheduler for adjusting resolution/connectivity (Fig. 5) is predefined (e.g., exponential), not learned. This may limit optimality across diverse datasets.
- Coarsening via voxel clustering or K-means could introduce artifacts; learnable pooling (e.g., DiffPool) might improve performance.
- Baseline comparisons: broader comparisons would further validate the approach.
Methods And Evaluation Criteria: The methods and evaluation criteria in this paper seem sound. Theoretical Claims: The correlation function in Theorem 3.2 is idealized; real-world spatial correlations may be more complex. Experimental Designs Or Analyses: The experimental design seems valid. Supplementary Material: Yes, Theorem 3.2. Relation To Broader Scientific Literature: - Essential References Not Discussed: - Other Strengths And Weaknesses: - Other Comments Or Suggestions: The paper presents a theoretically grounded and empirically validated method for noise-adaptive graph generation. 
While some assumptions simplify real-world complexity, the core claims are well-supported, and the approach demonstrates clear practical value. Future work could explore learned schedulers and broader baseline comparisons. Questions For Authors: **Q1:** The paper uses predefined schedulers (e.g., exponential) to interpolate between connectivity and resolution boundaries. Have you explored learned schedulers (e.g., via meta-learning or reinforcement learning) to dynamically optimize $r_t$ and $s_t$ during training? If not, do you foresee such approaches improving performance, particularly in domains where noise dynamics are non-monotonic or dataset-specific? **Q2:** Recent works like E(n)-GNNs and Equivariant Diffusion models explicitly encode geometric symmetries (e.g., rotation/translation invariance) into graph networks. How does DMP complement or contrast with these methods? Is there a way to integrate equivariance constraints into NCGNs, and could this enhance performance on tasks like molecular generation, where symmetry preservation is critical? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their time, effort, and constructive feedback. We address the reviewer’s concerns and questions below: > Learned scheduler **Implementing learnable schedules is difficult** because the schedule is used for kNN graph construction and coarse-graining procedures, which involve non-differentiable operations (sorting, clustering, discrete assignments) that are not amenable to standard backpropagation. Instead of relying purely on heuristics, we conducted ablation studies in Table 2 and Section 5.1.1 comparing four scheduling functions (linear, exponential, logarithmic, ReLU), finding that all significantly outperform fixed baselines, with exponential consistently performing best. While developing differentiable approximations for these operations represents an exciting future direction, we believe it warrants dedicated study beyond our current paper's scope. > Coarsening via voxel clustering or K-means could introduce artifacts; learnable pooling (e.g., DiffPool) might improve performance We agree that learnable pooling methods like DiffPool could potentially enhance performance. We emphasize that NCGN is a general framework where specific implementation details (including coarsening operations) can be customized. We intentionally selected simple, deterministic pooling methods to clearly demonstrate our core contribution—the benefit of noise-conditioned architectures—without confounding factors from complex learnable components. Our experimental results show that even with these basic coarsening operations, NCGNs deliver significant performance improvements across multiple domains. Future work could certainly explore more advanced coarsening operations within the NCGN framework. > E(n)-GNNs and Equivariant Diffusion models explicitly encode geometric symmetries [...]. Is there a way to integrate equivariance constraints into NCGNs? **DMP complements these equivariant methods**. 
As a general framework, NCGNs can incorporate any type of GNN layer, including equivariant ones. For instance, DMP could be implemented with equivariant layers like those in E(n)-GNNs, SchNet, or DimeNet that incorporate pairwise distances between nodes. Our theoretical analysis in Section 3 could also inform best practices for equivariant architectures—for example, many equivariant models operate on radius graphs, and our work provides principled guidance on how the radius should vary with noise level. While further investigation is needed, we believe the performance gains from NCGNs could potentially translate to equivariant models as well, making the application of NCGNs in structural biology and other geometric domains a promising direction for future research. > Correlation function in Theorem 3.2 is idealized; real-world spatial correlations may be more complex. We agree that the correlation function used in Theorem 3.2 is a simplified model of spatial correlation. This simplification was necessary to make theoretical analysis tractable while still capturing the essential property that correlation between nodes decreases with distance. Importantly, we designed our work to not rely solely on theoretical guarantees from idealized assumptions. Section 3.2 provides empirical analysis on real 3D geometric objects, demonstrating that our key insights about optimal reception fields and resolution scaling with noise also hold true in practice. > Broader baseline comparisons Our primary contribution is the noise-conditioning framework of NCGNs that can enhance existing architectures including many SOTA approaches. We implement NCGN on top of widely used architectures like GCNs and GATs and compare NCGN against general connectivity patterns including sparse connections (kNN), long and short connections, and full (quadratic) connections. 
In fact, **these effectively generalize many SOTA approaches**: the fully-connected GAT baseline is effectively a graph transformer, and the long-short range baselines capture connectivity patterns similar to architectures with virtual nodes. Furthermore, Section 5.3 demonstrates that incorporating noise-conditioning into a SOTA transformer-based model results in significant performance gains. --- Thank you again for your helpful feedback. We welcome any additional discussions.
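The four scheduling functions ablated in the rebuttal above (linear, exponential, logarithmic, ReLU) can be sketched as monotone interpolations of the connectivity $r_t$ between boundary values. The exact functional forms and the `r_min`/`r_max` values below are illustrative assumptions, not the paper's implementation.

```python
import math

def connectivity_schedule(t, r_min=4, r_max=64, kind="exponential"):
    """Interpolate the kNN connectivity r_t between r_min (t=0, no noise)
    and r_max (t=1, pure noise). Every schedule is monotone in t and
    agrees with the others at the boundaries."""
    if kind == "linear":
        w = t
    elif kind == "exponential":
        # rises slowly at first, steeply near t=1
        w = (math.exp(t) - 1.0) / (math.e - 1.0)
    elif kind == "logarithmic":
        # rises steeply at first, flattens near t=1
        w = math.log1p(t) / math.log(2.0)
    elif kind == "relu":
        # stays at r_min until a threshold, then ramps up linearly
        thresh = 0.5
        w = max(0.0, (t - thresh) / (1.0 - thresh))
    else:
        raise ValueError(kind)
    return round(r_min + w * (r_max - r_min))

# all schedules agree at the boundaries
for kind in ("linear", "exponential", "logarithmic", "relu"):
    assert connectivity_schedule(0.0, kind=kind) == 4
    assert connectivity_schedule(1.0, kind=kind) == 64
```

Note that each evaluation only produces an integer neighbor count fed to kNN construction, which is why the rebuttal describes the pipeline as non-differentiable with respect to the schedule.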
Summary: This paper introduces Noise-Conditioned Graph Networks (NCGNs), a generative modeling approach for geometric graphs that dynamically adjusts graph structure based on noise levels rather than keeping it fixed throughout the process. The authors propose a method to adapt how information flows through the graph depending on the noise intensity. At the core of this approach is Dynamic Message Passing (DMP), which modifies both the connectivity range (how far messages travel) and the resolution (level of detail in the graph representation) as noise changes. When noise is high, the model broadens connections and simplifies the graph structure; as noise decreases, it shifts to a more refined representation. The authors support this design with theoretical insights, showing that stronger noise necessitates information aggregation from more distant nodes and that coarser graph structures can reduce complexity while preserving essential information. To validate the approach, the authors evaluate DMP on 3D shape generation, spatial transcriptomics, and image generation, demonstrating that it consistently outperforms existing graph-based generative models. Importantly, the method maintains linear-time complexity, making it scalable for large datasets. This work highlights the importance of structured noise in generative models, arguing that their graph representations should evolve accordingly. The results suggest that noise-adaptive architectures lead to more expressive and efficient generative models across different domains. Claims And Evidence: Most claims are well-supported by theory and experiments, but some areas need further validation. The theoretical analysis and empirical results strongly support the idea that noise-adaptive message passing improves generative modeling. Experiments across 3D shape generation, spatial transcriptomics, and image generation show DMP consistently outperforms fixed architectures, with clear improvements in Wasserstein distance. 
The paper also convincingly explains how its scheduling strategy ensures linear-time complexity. However, some claims require more justification. The theoretical analysis assumes isotropic Gaussian noise and continuous graphs, which may not hold in all real-world settings. More validation on structured or irregular noise would strengthen this claim. The choice of an exponential scheduling function is somewhat heuristic, and a deeper study of learned schedules could provide stronger justification. While the method performs well on tested datasets, its applicability to other domains like molecular modeling or social networks remains uncertain. A discussion of these limitations would improve clarity. This I'll get back to later. Overall, the paper presents strong evidence for its approach, but addressing theoretical assumptions, adaptive scheduling choices, and broader applicability would further strengthen the claims. Methods And Evaluation Criteria: The methods and evaluation criteria align well with the problem. The authors use relevant benchmark datasets—ModelNet40, spatial transcriptomics, and ImageNet—covering diverse structured data types. Metrics like Wasserstein distance and FID are appropriate for measuring distribution alignment and sample quality. While comparisons to baseline graph-based models are useful, evaluating against other adaptive GNN architectures could further strengthen the analysis. Theoretical Claims: As far as I can tell, the proofs are done really well and are correct. Experimental Designs Or Analyses: Nothing to add here. Supplementary Material: I reviewed the supplementary material, focusing on the theoretical proofs, additional experimental details, and implementation specifics. Overall, the supplementary material strengthens the paper by providing detailed theoretical justifications and implementation insights. 
Relation To Broader Scientific Literature: The paper builds on and extends several key areas in generative modeling, geometric deep learning, and adaptive graph structures: - Flow-Based Generative Models: The work extends diffusion models (Ho et al., 2020) and flow matching (Lipman et al., 2022) by introducing a noise-adaptive graph structure, rather than using a static neural network across the generative process. This aligns with prior work in score-based generative modeling (Song & Ermon, 2019) but introduces a dynamic graph representation to improve expressivity. - Graph-Based Generative Modeling: Prior works on graph generative models (Corso et al., 2022; Xu et al., 2022) typically use fixed message-passing radii (e.g., k-NN graphs) during training. This paper challenges that assumption, showing that adjusting graph connectivity and resolution based on noise level leads to better representation learning. - Multi-Scale Graph Representations: The use of coarse-graining at high noise levels connects to ideas in hierarchical graph representations (Li et al., 2020) and graph signal processing (Oono & Suzuki, 2020). The paper strengthens the case that adaptive resolution improves generative modeling, a principle also explored in neural operators for PDEs. Essential References Not Discussed: - The paper should discuss adaptive GNNs (e.g., Li et al., 2018; Shirzad et al., 2022) as related approaches for dynamically adjusting graph connectivity. - Noise-aware graph learning (e.g., Luo et al., 2021; Zhang et al., 2020) is conceptually close to NCGNs and could be cited. - Hierarchical graph pooling methods (e.g., Ying et al., 2018; Defferrard et al., 2019) share similarities with DMP's coarse-graining approach. - Multi-resolution diffusion models (e.g., Zhao et al., 2023) explore resolution adaptation in generative modeling and could strengthen the discussion around DiT-DMP. 
Other Strengths And Weaknesses: The paper presents a compelling and well-motivated approach to generative modeling of geometric graphs by introducing Noise-Conditioned Graph Networks (NCGNs). The core idea—that graph neural network architectures should adapt dynamically to the noise level—is both intuitive and well-supported by theoretical and empirical analyses. I found the motivation strong, particularly in demonstrating that static graph structures are suboptimal for generative processes where noise plays a fundamental role. The experiments are thorough, covering diverse domains such as 3D point clouds, spatiotemporal transcriptomics, and image generation, which strengthens the paper’s claims regarding the generality of the approach. One of the key strengths of the paper is its blend of theoretical insight and practical implementation. The information-theoretic analysis linking noise levels to optimal message-passing radius is a valuable contribution, providing a principled foundation for the proposed method. Additionally, the empirical analysis supports these theoretical findings in a convincing way, particularly through attention weight visualizations and coarse-graining experiments. The modification of an existing state-of-the-art model (DiT) to incorporate NCGNs with minimal changes is another highlight, showcasing the practical viability of the approach. However, I do have some concerns. The theoretical results, while interesting, rely on certain simplifying assumptions, and I am not sure whether these are in conflict with real-world data. Moreover, though the empirical validation mitigates this to some extent, it would be useful to discuss potential limitations or failure cases in scenarios with highly structured noise or irregular graph connectivity patterns. Additionally, while the authors compare different adaptive scheduling strategies for DMP, the choice of an exponential schedule is somewhat heuristic. 
A more systematic exploration, perhaps with learned schedules or adaptive mechanisms, could further strengthen the claims. From a clarity perspective, the paper is generally well-written, but certain sections, particularly the theoretical derivations, could be made more accessible. The notation is sometimes dense, which may make it difficult for readers unfamiliar with flow-based generative models to follow. More intuitive explanations or visualizations of key mathematical insights could improve readability. Overall, this is a strong and original contribution with clear significance for the field of geometric generative modeling. While there are some areas for refinement, particularly in the theoretical assumptions and clarity of exposition, the paper convincingly demonstrates that noise-adaptive graph structures offer a meaningful improvement over existing approaches. I actually really like the idea. I think if you add some more ablations and explain the limitations of the work better, it is a very solid piece of work. Other Comments Or Suggestions: No Questions For Authors: See sections above. Code Of Conduct: Affirmed. Overall Recommendation: 4
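On the evaluation side discussed above: for equal-size 1D empirical samples, the Wasserstein-1 distance used as a distribution-alignment metric reduces to the mean absolute difference of sorted samples (a standard fact about optimal transport on the line; the data below is illustrative, not from the paper).

```python
def wasserstein_1d(xs, ys):
    """W1 between two equal-size 1D empirical distributions: the optimal
    transport plan matches order statistics, so the cost is the mean
    absolute difference of the sorted samples."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

xs = [0.0, 1.0, 2.0, 3.0]
# identical samples have zero distance ...
assert wasserstein_1d(xs, xs) == 0.0
# ... and shifting a sample by a constant shifts W1 by exactly that constant
assert wasserstein_1d(xs, [x + 0.5 for x in xs]) == 0.5
```

For higher-dimensional point clouds (as in the paper's experiments), the same metric requires solving a transport problem rather than a sort, but the interpretation as distribution alignment is identical.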
Rebuttal 1: Rebuttal: We thank the reviewer for their time, effort, and constructive feedback. We address the reviewer’s concerns and questions below: > Essential References Not Discussed Thank you for pointing out these references. We will ensure these works are discussed in the related works of the camera-ready version: **Adaptive GNNs (Li et al., 2018; Shirzad et al., 2022)**: While these works and our paper both adapt the graph structure, they address fundamentally different objectives—discriminative tasks or evaluation metrics rather than generative processes. Their adaptation mechanisms also differ from our noise-conditioned approach, as they either learn task-specific static structures or improve evaluation methods without dynamically modifying architectures based on noise levels. **Noise-aware graph learning (e.g. Luo et al., 2021; Zhang et al., 2020)**: While Luo et al. (2021) redraws edges at different noise levels, they use a *static* receptive field and resolution throughout the process (same fixed radius at all noise levels, as shown in Figure 3 of their paper), contrasting with our *dynamic* approach that systematically varies both factors. Zhang et al. (2020) focus on subsampling with full attention rather than noise-conditioning. **Hierarchical graph pooling (e.g. Ying et al., 2018; Defferrard et al., 2019)**: There are similarities between these approaches and our work as they all aim to have a multiscale view of the graph. However, these methods employ fixed hierarchical structures for classification/regression tasks, while our approach continuously adapts the hierarchical representation based on the noise level of the generative model. Further, the pooling techniques in these works could complement NCGNs as coarse-graining procedures. **Multi-resolution diffusion models (e.g. Zhao et al., 2023)**: We share a common insight with Zhao et al. 
(2023) and other works like cascading diffusion models: it is beneficial to have coarser representations at high noise regimes and fine-grain representations at low noise regimes. However, these works operate with discrete, manually-determined resolution stages requiring separate models for each resolution, while NCGNs provide a single continuous model with automatically adapted resolution *and connectivity*. > The theoretical analysis assumes isotropic Gaussian noise and continuous graphs, which may not hold in all real-world settings. More validation on structured or irregular noise would strengthen this claim. We acknowledge the limitations in our theoretical assumptions made in Section 3.1, but would like to clarify two key points: (1) **Isotropic Gaussian noise is the standard choice** in modern flow-based generative models, including diffusion models [1], flow-matching [2], and other variants [3,4]. This makes the isotropic Gaussian noise model assumption directly applicable to most real-world generative modeling settings. (2) The continuous graph assumption is indeed a simplification but is necessary to make the theoretical analysis tractable; to address the concerns, we specifically designed our empirical analysis in Section 3.2 to validate that our theoretical insights hold even when these assumptions are relaxed. [1] Denoising Diffusion Probabilistic Models. NeurIPS, 2020. [2] Flow Matching for Generative Modeling. ICLR, 2023. [3] Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow. ICLR, 2023. [4] Action Matching: Learning Stochastic Dynamics from Samples. ICML, 2023. > The choice of an exponential scheduling function is somewhat heuristic, and a deeper study of learned schedules could provide stronger justification. 
**Implementing learnable schedules is difficult** because the schedule is used for kNN graph construction and coarse-graining procedures, which involve non-differentiable operations (sorting, clustering, discrete assignments) that are not amenable to standard backpropagation. Instead of relying purely on heuristics, we conducted ablation studies in Table 2 and Section 5.1.1 comparing four scheduling functions (linear, exponential, logarithmic, ReLU), finding that all significantly outperform fixed baselines, with exponential consistently performing best. While developing differentiable approximations for these operations represents an exciting future direction, we believe it warrants dedicated study beyond our current paper's scope. > certain sections, particularly the theoretical derivations, could be made more accessible. The notation is sometimes dense, which may make it difficult for readers unfamiliar with flow-based generative models to follow. We agree that some theoretical sections and notation could be made more accessible. Are there any specific sections, derivations, or notations you think could be improved? This would help us prioritize changes for the camera-ready version. --- Thank you again for your helpful feedback. We welcome any additional discussions.
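The non-differentiable operations the rebuttal cites (sorting for kNN construction, discrete cell assignment for coarse-graining) can be made concrete with a minimal sketch. The grid-based coarsening and the point data below are illustrative assumptions, not the paper's exact procedures.

```python
def knn_edges(points, k):
    """Build directed kNN edges by exact distance sorting (non-differentiable:
    argsort produces discrete neighbor indices)."""
    edges = []
    for i, p in enumerate(points):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(p, q)), j)
            for j, q in enumerate(points) if j != i
        )
        edges.extend((i, j) for _, j in dists[:k])
    return edges

def grid_coarsen(points, cell=1.0):
    """Coarse-grain by snapping points to grid cells and averaging each cell
    (non-differentiable: the cell assignment is a discrete floor operation)."""
    cells = {}
    for p in points:
        key = tuple(int(c // cell) for c in p)
        cells.setdefault(key, []).append(p)
    return [
        tuple(sum(c) / len(group) for c in zip(*group))
        for group in cells.values()
    ]

pts = [(0.1, 0.2), (0.3, 0.1), (2.2, 2.1), (2.4, 2.3)]
assert len(knn_edges(pts, k=1)) == 4          # one nearest neighbor per node
assert len(grid_coarsen(pts, cell=1.0)) == 2  # two occupied cells -> two supernodes
```

A noise-conditioned variant would simply drive `k` and `cell` from the noise level, which is why gradients cannot flow through the schedule without differentiable relaxations.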
Summary: This work proposes changing the architecture of the backbone model according to the noise level in flow matching models. It shows that the receptive field should be expanded and the resolution should be coarsened at high noise levels. Based on this insight, this work proposes DMP, which consistently outperforms noise-independent architectures on 3D point clouds, spatio-temporal data, and images. Claims And Evidence: The authors somewhat overclaim their contribution. In the abstract, this work claims to change the GNN architecture according to the noise level; however, only the resolution and receptive field are changed, while other architecture designs are not discussed. Methods And Evaluation Criteria: This work includes various tasks. However, the baselines only include the most vanilla models in the field, and SOTA methods are not compared. Moreover, the resolution/receptive field is the central point of this work, yet graph transformers and GNNs with virtual nodes are known to capture global information. This work includes DiT in the image task, but these global GNN baselines are still important for the other tasks. Theoretical Claims: Yes, I checked Section 3. Experimental Designs Or Analyses: Yes, I checked Section 5.1. Supplementary Material: I checked Appendix A for proofs. Relation To Broader Scientific Literature: This work improves flow-based generative models. But its idea is novel to me. Essential References Not Discussed: In Section 3.1, this work cites no work on geometric generative models. Other Strengths And Weaknesses: Clear theory and straightforward intuition on the relation between resolution and noise level. Other Comments Or Suggestions: Tables' captions should be placed above the tables. Questions For Authors: Will the change of architecture and graph during generation lead to significant computation overhead? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their time, effort, and constructive feedback. We address the reviewer’s concerns and questions below: > In section 3.1, this work cited no work on geometric generative models Our paper references key geometric generative modeling works in the introduction (e.g. [1-5]), but we agree that these references should be more explicitly discussed in Section 2.1 as well. We will ensure these important works are properly cited and discussed in Section 2.1 of the camera-ready version. Please let us know if there are relevant references we missed. [1] An autoregressive generative model of 3d meshes. ICML, 2020. [2] Pointflow: 3d point cloud generation with continuous normalizing flows. ICCV, 2019. [3] Highly accurate protein structure prediction with alphafold. Nature, 2021. [4] Diffdock: Diffusion steps, twists, and turns for molecular docking. ICLR, 2023. [5] Spatiotemporal modeling of molecular holograms. Cell, 2024. > Authors somehow overclaims their contribution. In abstract, this works claims to change GNN architecture according to noise level, however, only resolution and reception fields are changed, while other architecture designs are not discussed. Our original claim is for **the general framework of NCGN** since it is the first to formalize conditioning the GNN architecture on the noise-level of the generative process. **DMP is a specific implementation** of NCGNs that provides a practical model that can be used out-of-the-box with current flow-based generative models, as shown in Section 5.3. And while it is true that many other architectural components of GNNs could be conditioned on the noise level, Section 3 of our paper theoretically and empirically suggests that resolution and reception are two critical changes that impact expressivity, which is why we focus on these in our paper. 
Thus, our work serves as a first step in the potentially many implementations of NCGNs to improve performance of geometric generative models. As noted in the conclusion, NCGNs could adapt other architectural components (layer count, width, message passing type) in future work. We will clarify this distinction more explicitly in the camera-ready version to prevent any perception of overclaiming. > Baseline only includes the most vanilla models in the field, and the SOTA methods are not compared. Moreover, resolution/reception field in the center point in this work. However, graph transformers and GNNs with virtual node are known to capture global information. **DMP complements rather than replaces SOTA methods**. Our primary contribution is the noise-conditioning framework that can enhance existing architectures. In fact, DMP already incorporates concepts similar to what the reviewer suggests: 1. **DMP utilizes virtual nodes by design** - Our coarse-grained "supernodes" in Section 4.1 function similarly to virtual nodes, but with a critical difference: they dynamically adapt their number and connectivity based on noise level. At high noise, we use fewer supernodes with wider receptive fields; at low noise, we use more supernodes with localized connectivity. 2. **Our baselines effectively generalize many SOTA approaches** - Our fully-connected GAT baseline is effectively a graph transformer, the long-short range baselines capture similar connectivity patterns as architectures with static virtual nodes, and Section 5.3 demonstrates that incorporating noise-conditioning into a SOTA transformer-based model results in significant performance gains. > Will the change of architecture and graph during generation leads to significant computation overhead? 
Detailed in Section 4.1 under “Linear Time Complexity,” DMP maintains **linear-time message passing** throughout the generative process by ensuring that the product of connectivity ($r_t$) and resolution ($s_t$) remains constant: $r_ts_t = r_1N$. This design enables DMP to take "the best of both worlds" of having the expressivity benefits of more complex (e.g. fully connected) architectures while having the time complexity of sparsely connected methods. --- Thank you again for your helpful feedback. We welcome any additional discussions.
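The constant-product constraint $r_t s_t = r_1 N$ described above can be checked numerically: at every noise level the message count stays near $r_1 N$, i.e., linear in $N$. The exponential interpolation of $r_t$ and the boundary values below are assumptions for illustration, not the paper's exact schedule.

```python
def message_count(N, r1, t_values):
    """At noise level t the graph has s_t supernodes, each with r_t neighbors,
    so one message-passing step costs about s_t * r_t edges. Keeping
    r_t * s_t = r1 * N fixed makes this cost linear in N at every t."""
    costs = []
    for t in t_values:
        # assumed exponential interpolation of connectivity from r1 (t=0) to N (t=1)
        r_t = round(r1 * (N / r1) ** t)
        s_t = round(r1 * N / r_t)  # resolution chosen to keep the product fixed
        costs.append(r_t * s_t)
    return costs

N, r1 = 1024, 8
costs = message_count(N, r1, [0.0, 0.25, 0.5, 0.75, 1.0])
# every noise level costs ~ r1 * N = 8192 messages, up to integer rounding
assert all(abs(c - r1 * N) / (r1 * N) < 0.01 for c in costs)
```

At $t=0$ this is a sparse kNN graph over all $N$ nodes; at $t=1$ it is a fully connected graph over only $r_1$ supernodes, matching the "best of both worlds" claim in the rebuttal.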
Mixture of Experts Provably Detect and Learn the Latent Cluster Structure in Gradient-Based Learning
Accept (poster)
Summary: The authors consider the problem of learning a mixture of $C$ tasks, plus one global task, with a mixture of two-layer network experts. They first establish that a single network is unable to recover the global signal, by constructing a special instance of this class of targets. They then turn to analyze the mixture of experts, trained with a stagewise algorithm in which different sets of parameters are sequentially trained, and prove that this model is, in contrast, able to learn the target function. A fine analysis of the features learned at each stage is provided, with notably the first-layer neurons specializing to different subtasks, and the routing weights then learning to point towards relevant experts, with the global task finally being learned. Claims And Evidence: The paper is primarily theoretical in nature. The authors bolster and illustrate their theoretical insights -- namely the specialization of experts and routing weights at different stages -- in convincing numerical experiments in Figs. 1, 2. Methods And Evaluation Criteria: The paper is primarily theoretical in nature. Theoretical Claims: I have not checked the proofs in detail. From my reading, the claims seem sound. Experimental Designs Or Analyses: The experiments presented in Figs. 1, 2 seem sound. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The work is similar in spirit to the prior key work of (Chen et al., 2022) in the investigation of feature learning by mixtures of experts, although the task and setting differ. To the best of my understanding, the novelty of the present work lies in the fact that the complexity of the target function can be understood through its effective information exponent, thus connecting to a rich literature on this topic in the simpler setting of learning single- or multi-index models with single networks (e.g., Arous et al., 2021). 
This allows for insightful explanations of the inability of single experts to learn the function, as discussed in 4.2.1. Essential References Not Discussed: To the best of my knowledge of this line of work, I am not aware of an essential reference the authors fail to cite. Other Strengths And Weaknesses: I am overall in favor of acceptance. The paper establishes a clear theoretical result for an interesting model, and connects to the body of work on information exponents, which I believe paves interesting bridges for future works. The theoretical results are clearly exposed, and sufficient discussion is provided whenever needed to build intuition (e.g. in subsection 3.3). I did not however carefully check the proofs, nor do I have the expertise to provide an educated reading of the latter. Other Comments Or Suggestions: I do not have major comments or suggestions. Questions For Authors: 1. (Clarification question) Are there any conditions on the respective scaling of $M$ with $C$ for Theorem 4.7 to be applicable? Is it the condition of Lemma 4.9? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Reviewer wqTg, We sincerely thank the reviewer for their thoughtful evaluation and insightful questions. ## Q.1 As the reviewer correctly noted, the conditions for the theorem rely on the inequality $M \gtrsim C \log (C / 0.001)$ in Lemma 4.9. In connection with this, while we initially described $M$ and $C$ as polylogarithmic in the input dimension $d$, we have since realized that ensuring reliable initialization requires both to be $O(1)$. We will revise the final version accordingly. This refinement is made for theoretical clarity and does not affect the main conclusions or practical relevance of our work. Aside from this adjustment, no other major changes to assumptions or results are necessary. As discussed in Oko et al. (2024a) in the context of feature learning, the Hermite coefficients of the student model can be perturbed by random initialization, a factor not considered in prior work on MoE classification (Chen et al., 2022). Our analysis builds on this observation to study robust specialization under such randomness. However, if the number of clusters $C$ is too large, the number of expert-to-router combinations grows rapidly, making the specialization problem significantly more challenging. We stress that this assumption is mainly to ensure **theoretical reliability**. In practice, MoE models often incorporate techniques such as auxiliary diversity losses in routing to promote **expert diversity** and avoid **premature collapse** (Cai et al., 2024). Our assumption reflects these practical strategies, and the insights into learning dynamics and specialization in MoE remain valid and relevant. ## Implications from our theoretical results While our study is theoretical in nature, it also carries practical implications. These implications are discussed in the **"Implications"** part of our response to Reviewer **YoCg**, which we kindly invite the reviewer to refer to. 
Due to character limits, we respectfully make use of this space to provide the references. ### References * Allen-Zhu & Li (2023), Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning, ICLR. * Arous et al. (2021), Online stochastic gradient descent on non-convex losses from high-dimensional inference, JMLR. * Bietti et al. (2022), Learning single-index models with shallow neural networks, NIPS. * Cai et al. (2024), A survey on mixture of experts, arXiv preprint arXiv:2407.06204. * Chen et al. (2022), Towards understanding the mixture-of-experts layer in deep learning, NIPS. * Chowdhury et al. (2023), Patch-level routing in mixture-of-experts is provably sample-efficient for convolutional neural networks, ICML. * Damian et al. (2022), Neural networks can learn representations with gradient descent, COLT. * Dayi & Chen (2024), Gradient dynamics for low-rank fine-tuning beyond kernels, arXiv preprint arXiv:2411.15385. * Dudeja et al. (2018), Learning Single-Index Models in Gaussian Space, PMLR. * Ge et al. (2018), Learning One-hidden-layer Neural Networks with Landscape Design, ICLR. * Huang et al. (2024), Harder Tasks Need More Experts: Dynamic Routing in MoE Models, ACL. * Komatsuzaki et al. (2023), Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints, ICLR. * Li et al. (2025), Theory on Mixture-of-Experts in Continual Learning, ICLR. * Liu et al. (2024), GRIN: GRadient-INformed MoE, arXiv preprint arXiv:2409.12136. * Mousavi-Hosseini et al. (2023), Gradient-Based Feature Learning under Structured Data, NIPS. * Oko et al. (2024a), Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations, COLT. * Oko et al. (2024b), Pretrained transformer efficiently learns low-dimensional target functions in-context, NIPS. * Pan et al. 
(2024), Dense Training, Sparse Inference: Rethinking Training of Mixture-of-Experts Language Models, arXiv preprint arXiv:2404.05567. * Roller et al. (2021), Hash Layers For Large Sparse Models, NIPS. * Shazeer et al. (2017), Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer, ICLR. * Simsek et al. (2025), Learning Gaussian Multi-Index Models with Gradient Flow: Time Complexity and Directional Convergence, CPAL. * Tang et al. (2025), Solving Token Gradient Conflict in Mixture-of-Experts for Large Vision-Language Model, ICLR. * Vural & Erdogdu. (2024), Pruning is Optimal for Learning Sparse Features in High-Dimensions, COLT. * Wei et al. (2024), Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models, arXiv preprint arXiv:2406.06563. * Zeng et al. (2024), AdaMoE: Token-Adaptive Routing with Null Experts for Mixture-of-Experts Language Models, ACL. * Zhang et al. (2024), Diversifying the Expert Knowledge for Task-Agnostic Pruning in Sparse Mixture-of-Experts, arXiv preprint arXiv:2407.09590. * Zhou et al. (2022), Mixture-of-Experts with Expert Choice Routing, NIPS. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed clarifications. Regarding the revision of $M,C=O(1)$. If possible, could the authors provide clarifications on the following points: - Does it change any of the statements of the technical results of the paper, or do they all carry through? > As discussed in Oko et al. (2024a) in the context of feature learning, the Hermite coefficients of the student model can be perturbed by random initialization - Could the authors develop more on this point, or refer me to the relevant part of (Oko et al, 2024a) / their proofs? --- Reply to Comment 1.1.1: Comment: ## Reviewer wqTg, We greatly appreciate the reviewer’s thoughtful follow-up questions. Regarding the first point, the main results of the paper **remain unchanged and carry through**. 
However, we would like to revise the assumptions here to ensure reliable initialization in Phase I below, which also addresses the second part of the question. In the original analysis, we did not fully account for the combinatorial growth in the number of expert-neuron and cluster assignments as $M$, $J$, and $C$ increase, which leads to a failure probability in initialization that becomes unacceptably large. Additionally, our original proof required collecting $O(\epsilon^{-1})$ specialized neurons per expert during initialization. In the revised version, we instead target **the specialization of a single neuron per expert** in Phase I, which simplifies the initialization argument and improves reliability under high-dimensional scaling. We reformulate the assumption as $M, C = O(1)$, and $J$ satisfies $J \lesssim \sqrt{\log d}$. In addition, we introduce a mild technical assumption on the teacher model: the minimum-order non-zero Hermite coefficients have approximately equal magnitudes across clusters. This constraint on $J$ in turn imposes a condition on the estimation error $\epsilon$, requiring $\epsilon^{-1}\lesssim \sqrt{\log d}$. Similar assumptions, in which $\epsilon\to 0$ as $d\to \infty$, appear in Definition 4 of Abbe et al. (2022), Corollary 9 of Abbe et al. (2023), and Theorem 1 of Braun et al. (2025). The revised argument in Phase I resembles the approach of Chen et al. (2022), rather than that of Oko et al. (2024a). Specifically, we use the early part of the proof of Lemma 2 of Oko et al. (2024a) to construct i.i.d. Gaussian variables, and then reduce the argument to the initialization framework of Chen et al. (2022). In this phase, weak recovery leads to the specialization of a single neuron per expert. Following the perspective of Oko et al. (2024a), we analyze this process through the Hermite expansion, which motivates the additional assumptions.
The reason is as follows: To ensure robustness, we derive a condition for the specialization of a neuron during weak recovery based on the methods introduced in Chen et al. (2022), while accounting for interactions among neurons. As in Lemma 2 of Oko et al. (2024a), we must ensure that the margin $(\delta)^{(p-2)/2}$, where $\delta$ is defined in their lemma, is preserved. This quantity provides robustness under interactions and should remain greater than $(\log d)^{-1}$. This requires $M J^2 C^2\lesssim \log d$, which is satisfied with $M,C=O(1)$ and $J\lesssim \sqrt{\log d}$. As shown in Corollary 15 of Oko et al. (2024a), it is required that the factor $(C_p)^{(p-2)/2}$ satisfy $$ (C_p)^{(p-2)/2} = 1 + O\left(\frac{1}{M J^2 C^2}\right) = 1 + O\left((\log d)^{-1}\right). $$ Note that Lemma D.4 and Lemma F.1 in Chen et al. (2022) provide relevant discussion on this point. According to Corollary 15 of Oko et al. (2024a), $(C_p)^{(p-2)/2}$ corresponds to the ratio of absolute values of the lowest-order Hermite coefficients. Hence, the assumption that these coefficients are within $1 + O((\log d)^{-1})$ of each other is necessary. Furthermore, since the effective signal-to-noise ratio available to the router decreases with increasing network width $J$, we require $\epsilon^{-1} \lesssim \sqrt{\log d}$ and $J \lesssim \sqrt{\log d}$ to ensure that the signal can be reliably detected during initialization. We will clarify these technical points and include more explicit references to the relevant parts of Chen et al. (2022) and Oko et al. (2024a) in the final version.

### Additional References

* Abbe et al. (2022), The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks, COLT.
* Abbe et al. (2023), SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics, COLT.
* Braun et al.
(2025), Learning a single index model from anisotropic data with vanilla stochastic gradient descent, arXiv preprint arXiv:2503.23642.
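For quick reference, the revised parameter regime described in this thread can be collected in one place (this merely restates the conditions given above, with no new claims):

$$
M,\ C = O(1), \qquad J \lesssim \sqrt{\log d}, \qquad \epsilon^{-1} \lesssim \sqrt{\log d}, \qquad M J^2 C^2 \lesssim \log d,
$$
$$
(C_p)^{(p-2)/2} \;=\; 1 + O\!\left(\frac{1}{M J^2 C^2}\right) \;=\; 1 + O\!\left((\log d)^{-1}\right).
$$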
Summary: This paper theoretically studies the learning dynamics of Mixture-of-Experts (MoE) models in nonlinear regression tasks with an underlying cluster structure. The main contributions of the paper are listed below:

- Proves that a standard neural network fails to detect and exploit the latent cluster structure, while MoE successfully separates and learns simpler subproblems.
- Analyzes the SGD training dynamics of MoE, showing that experts weakly recover cluster-specific functions, and the router learns to assign data correctly.
- Establishes polynomial time and sample complexity, demonstrating MoE’s efficiency in learning clustered tasks.
- Proposes and theoretically validates a multi-phase training algorithm for MoE, alternating between expert learning and router learning to optimize specialization.

Claims And Evidence: The paper provides strong theoretical proofs under specific assumptions like population gradient flow and orthogonal features. However, these conditions may be too restrictive for real-world scenarios, and the claimed failure of vanilla networks is shown under specific setups, so it may not generalize. Methods And Evaluation Criteria: The paper’s chosen regression setup (with latent clusters and a shared global component) generally aligns with its theoretical goals, making the methods and evaluation criteria reasonable for investigating MoE’s advantages in that specific setting. However, the multi-phase training procedure is not particularly relevant to state-of-the-art deep learning applications, though it provides valuable theoretical insights. Moreover, the authors considered two routing mechanisms: top-1 routing and threshold-based routing, the latter of which is not a standard approach. These choices were likely made to simplify the theory, but they resulted in an oversimplification.
Theoretical Claims: I briefly reviewed the proofs, and the theoretical results appear overly complex and dense, making them harder to follow than necessary. Experimental Designs Or Analyses: The experiments are limited in scope but provide valuable insights. Supplementary Material: N/A Relation To Broader Scientific Literature: This paper takes a step toward understanding MoE training. While the theoretical scope is overly simplified, it serves as a solid starting point. Essential References Not Discussed: The authors could add additional Other Strengths And Weaknesses: - The application of theoretical ideas from feature learning to MoE is novel and interesting to me. - The presentation of the theoretical results and problem setup is unnecessarily complex. I strongly encourage the authors to refine the notation for better clarity and readability. Other Comments Or Suggestions: N/A Questions For Authors: 1. How do the theoretical results change with different routing mechanisms? 2. How might your analysis extend to scenarios where the cluster signals and global signal are only partially orthogonal or have overlapping directions, and do you expect the router to still reliably separate clusters under weaker assumptions? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Reviewer bEDR, We sincerely thank the reviewer for the thoughtful and constructive feedback. We will incorporate the suggestions into the final version. Below, we address the reviewer’s comments. For references, please see our response to Reviewer **wqTg** due to character limits. ## Multi-phase training We agree that the multi-phase training procedure deviates from practical applications. However, multi-stage optimization is often employed to reveal the intrinsic nature of complex optimization problems and is commonly used in studies of feature learning (Arous et al., 2021; Ge et al., 2018; Dudeja et al., 2018; Bietti et al., 2022; Damian et al., 2022; Mousavi-Hosseini et al., 2024; Oko et al., 2024a,b). Joint training of the router and experts is promising, but in nonlinear regression, it poses intractable non-convex challenges and is left for future work. ## Assumptions and practical relevance Since obtaining negative results for vanilla neural networks is mathematically challenging, we followed the approach of using gradient flow, as in Mousavi-Hosseini et al. (2024) and Simsek et al. (2025). In addition, while Chen et al. (2022) assume all signals are orthogonal, our analysis only requires orthogonality to the mean vector. Our theoretical results remain rich in implications. Please refer to the **"Implications"** section of our response to Reviewer **YoCg**. ## Router We would like to clarify that our theoretical routing mechanism is not an oversimplification, but rather a **principled approach** to addressing fundamental technical challenges in MoE without practical heuristics. In Phase II, training is performed using top-1 routing with added random noise $r_m$.
After this phase, **a data point $x_c$ is no longer routed to experts $m' \notin \mathcal{M}_c$** (where $\mathcal{M}\_c$ is the set of professional experts) due to the negative correlation between its router weight $\theta_{m'}$ and the cluster center vector $v_c$ (Lemma 4.11 and Lemma C.16). However, it remains possible that **competition among experts $m \in \mathcal{M}_c$** leads to load imbalance if we employ top-1 routing. This phenomenon differs from the classification setting in Chen et al. (2022) and Chowdhury et al. (2023), and stems from the need to estimate a continuous function in the regression setting. We analytically determine the cause and employ an **adaptive top-$k$** routing in Phases III and IV, where an expert $m \in \mathcal{M}_c$ is activated if it satisfies a sufficient condition $h_m(x_c) \geq 0$, ensuring that it covers the data distribution of cluster $c$. The value of $k$ is **adaptively** determined based on the data, a strategy that is also adopted in practice (Huang et al., 2024; Zeng et al., 2024). Importantly, while the adaptive top-$k$ arises from a theoretical and technical challenge, practical implementations often incorporate auxiliary losses or dense training to address load imbalance (see lines 126–131 in the right column), ensuring consistency between theory and practice. ## Q.1 * **Expert choice routing (ECR)**: Since ECR (Zhou et al., 2022) uses the same gating network as our token choice routing (TCR) (Shazeer et al., 2017), both methods identify clusters through differences in expert recovery reflected in the gradients. The key difference is that TCR selects top-k experts per token, while ECR selects top-k tokens per expert, potentially dropping tokens unselected by any expert. We believe similar theoretical results are attainable if token drop is mitigated, e.g., via expert capacity constraints. * **Dense MoE**: Dense training (Pan et al., 2024) activates all experts without applying top-k selection to the softmax outputs.
Since all experts are activated, the gradient for expert $m$ depends on a comparison with a weighted average over all experts, potentially causing gradient competition. Specifically, when weak recovery occurs across experts $m \in \mathcal{M}_c$, differences in gradient magnitudes can distort the update direction and complicate theoretical analysis. Therefore, similar guarantees in the dense setting demand extra assumptions or modified algorithms. * **Hash-based Routing**: Since hash-based routing (Roller et al., 2021) is non-learning, misaligned partitioning of the cluster structure may induce gradient interference and hinder theoretical guarantees. ## Q.2 If all signals ($w^*_i$, $v_j$) have overlaps of order $\tilde{O}(d^{-1/2})$, then by carefully bounding the resulting overlap terms, we believe the MoE can still reliably learn the underlying cluster structure. Specifically, the gating network's gradient includes additional terms of magnitude $\tilde{O}(d^{-1/2})$ relative to the main signal, arising across task–neuron combinations. In contrast, if the overlap is of order $\tilde{\Omega}(1)$, the dynamics become profoundly more complex. Thus, we are unable to provide a definitive conclusion in this case. We are happy to clarify any remaining questions during the discussion. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the thoughtful rebuttal and clarifications. I appreciate the effort you made to address the concerns raised. I will be maintaining my weak accept score. While I believe the theoretical setup considered in the paper is somewhat oversimplified, it provides a useful and focused starting point for understanding the training dynamics of MoE models.
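The two routing rules contrasted in the rebuttal above, noisy top-1 routing in Phase II and threshold-based adaptive top-$k$ routing in Phases III and IV, can be sketched in a few lines of NumPy. This is a purely illustrative toy, not the paper's exact construction: the router weights, noise scale, and the fall-back rule when no gate value is non-negative are all assumptions made here for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, M = 32, 4                                   # input dimension, number of experts
theta = rng.normal(size=(M, d)) / np.sqrt(d)   # illustrative router weights, one row per expert

def top1_with_noise(x, noise_scale=0.1):
    """Phase-II style routing: argmax over noisy gate values h_m(x) + r_m."""
    h = theta @ x + noise_scale * rng.normal(size=M)
    return [int(np.argmax(h))]

def adaptive_topk(x):
    """Phase III/IV style routing: activate every expert with h_m(x) >= 0,
    so the number of active experts k varies with the input."""
    h = theta @ x
    active = [m for m in range(M) if h[m] >= 0]
    return active if active else [int(np.argmax(h))]  # illustrative fallback

x = rng.normal(size=d)
print(top1_with_noise(x), adaptive_topk(x))
```

Note how the first rule always activates exactly one expert, while the second yields an input-dependent, stochastic $k$, which is the property the rebuttal emphasizes.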
Summary: This paper presents a comprehensive analysis of the sample and computational complexity of Mixture of Experts (MoEs) when optimized using stochastic gradient descent (SGD) for a regression problem. Claims And Evidence: Based on the data model proposed in Assumption 3.2, all the claims appear highly likely to be correct. Methods And Evaluation Criteria: N/A Theoretical Claims: The Appendix lacks a clear structure and primarily consists of a dense collection of theoretical results without sufficient explanatory text. As a result, I find it difficult to follow most of the proofs. Experimental Designs Or Analyses: Good. Supplementary Material: I reviewed the supplementary materials but was unable to grasp any details. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: 1. A numerical justification using real data is needed to support Assumption 3.2. 2. The theoretical results and experiments should be presented separately, or at least, a high-level summary of the theoretical findings should be provided. Currently, it is difficult to discern the major implications of your theoretical results. Additionally, it would be beneficial to highlight key differences from existing works, such as Theorem 4.3. 3. Real-data numerical experiments are necessary to further validate the theoretical findings. ---------------------------- Post-rebuttal comments: I will keep my original score (3). I appreciate the new theoretical insights introduced in this paper; however, these insights need to be further validated through experiments on real data. Without such empirical justification, the impact remains limited. Questions For Authors: Any thoughts on improving the structure or training process of MoE from your theoretical results? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # Reviewer VrRr, We sincerely thank the reviewer for the insightful and constructive comments. In the final version, we will separate the presentation of theoretical results and experiments, and further elaborate on the writing in the supplementary material. For the theoretical results, we will include a high-level summary, detail the implications, and highlight key differences from prior works. We clarify each point and question below. Owing to the character limits, we respectfully refer the reviewer to our author response to Reviewer **wqTg** for the references. ## Implications Please refer to the **"Implications"** part in our response to Reviewer **YoCg**. ## Key differences from existing works Prior MoE optimization studies (Chen et al., 2022; Chowdhury et al., 2023) focus on binary classification with noisy features. In contrast, we analyze regression and highlight how **gradient interference** (see lines 125-127 in the left column) across clusters weakens the signal and effectively increases the **information exponent**, revealing a clear separation between vanilla neural networks and MoE in Theorem 4.3. Although Li et al. (2025) also tackle regression, their linear setting does not account for the **non-convex** exploration phase we uncover. Regarding information exponents in feature learning, Oko et al. (2024a) assume additive task structure within single data points, whereas we study **additive structure across clusters**. Theorem 4.7 is the first to apply nonlinear sample complexity analysis to MoE, showing how the router’s gradients reflect expert specialization via **weak recovery** differences. ## Real-data experiments We certainly appreciate the importance of experiments on real-world data, as the reviewer has rightly pointed out. However, our study is theoretical in nature. Real-world datasets inevitably contain noise, which makes it challenging to isolate specific phenomena in a controlled and interpretable manner.
Therefore, we conducted controlled experiments using synthetic data. Within the line of work on the complexity of learning low-dimensional nonlinear functions in specific architectures, Dayi & Chen (2024) and Oko et al. (2024b) also rely on synthetic experiments based on Gaussian data, while Vural & Erdogdu (2024) do not provide empirical results. In the final revision, we will include the more detailed learning process of MoE based on synthetic experiments. ## Q. ### Structure of MoE * **Gradient-aware routing:** While our analysis shows that the MoE mitigates gradient interference by explicitly partitioning the network, the number of experts is typically selected heuristically in practice. This raises the possibility that integrating gradient-aware routing mechanisms could lead to more efficient and adaptive MoE architectures. Recent works also explore the role of gradient interference in MoE (Liu et al., 2024; Tang et al., 2025). * **Adaptive routing:** To prevent competition among professional experts ($\in \mathcal{M}_c$), we adopted top-$k$ routing to avoid potential load imbalance. Thus, dynamically adjusting the number of activated experts $k$ is expected to stabilize training. This insight stems from our theoretical studies in nonlinear regression and aligns with recent NLP research (Huang et al., 2024; Zeng et al., 2024), where $k$ is adaptively varied based on tokens. * **Freezing (pruning) redundant experts:** To mitigate competition among professional experts, reducing redundant experts is also worth exploring, potentially lowering deployment cost. Recent work has investigated expert merging in this context (Zhang et al., 2024). ### Training process of MoE * **Upcycling from dense checkpoints:** We argued that meaningful router learning requires **differences** in weak recovery among experts, which in turn necessitates a **long exploration stage** due to the non-convex optimization. 
In this light, upcycling dense MLP checkpoints trained on distinct domains may offer a practical approach to accelerate convergence. This idea has seen several empirical successes in recent LLMs (Komatsuzaki et al., 2023; Wei et al., 2024). * **Stage-specific routing:** The noise $r_m$ introduced during Phase II ensures uniform gradient flow and sufficient signal for all experts. In contrast, the adaptive top-$k$ routing in Phase III and IV addresses competition among professional experts. This distinction—different challenges occur at each learning stage—suggests that stage-specific routing strategies may be helpful for effective MoE training. We welcome any further questions or concerns that may arise and would be pleased to provide clarification.
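The gradient-interference mechanism referenced in this thread (under "Key differences") can be caricatured with a deliberately oversimplified linear toy, which is not the paper's nonlinear setting: when two clusters have opposing targets, the averaged population gradient seen by a single model cancels, while each expert, seeing only its own cluster, retains the full signal. All quantities below are illustrative assumptions.

```python
import numpy as np

# Linear caricature of gradient interference: two clusters whose targets cancel.
d = 8
t1 = np.zeros(d); t1[0] = 1.0   # cluster-1 target direction
t2 = -t1                         # cluster-2 target: exactly opposite

def pop_grad(w, t):
    # Population gradient of 0.5 * E[(t.x - w.x)^2] over x ~ N(0, I) is (w - t).
    return w - t

w_single = np.zeros(d)
w_exp1, w_exp2 = np.zeros(d), np.zeros(d)
lr = 0.5
for _ in range(50):
    # A single network averages the two clusters' gradients: the signals cancel.
    w_single -= lr * 0.5 * (pop_grad(w_single, t1) + pop_grad(w_single, t2))
    # Each expert sees only its own cluster: full signal.
    w_exp1 -= lr * pop_grad(w_exp1, t1)
    w_exp2 -= lr * pop_grad(w_exp2, t2)

# w_single stays at the origin (no cluster-specific learning),
# while each expert converges to its cluster's target.
```

In the paper's setting the cancellation is not total, but attenuates the leading Hermite term and raises the effective information exponent; this toy only illustrates the direction of the effect.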
Summary: In this paper, the authors investigate the sample and runtime complexity of Mixture-of-Experts (MoE) optimized with the stochastic gradient descent when learning a regression task with an underlying cluster structure of single index models. In particular, they show that a single neural network cannot detect a latent cluster structure. On the other hand, MoE is capable of doing that as it aggregates the ability of each expert to weakly recover the simpler function associated with an individual cluster. All the results are proved rigorously. Finally, they conduct some numerical experiments to justify the theory. Claims And Evidence: Yes, the results are supported by both rigorous proofs and empirical evidences. Methods And Evaluation Criteria: Yes. Theoretical Claims: I did not check the theoretical proofs carefully. However, I read the proof sketches and the arguments made sense to me. Experimental Designs Or Analyses: Yes, I checked the experimental setup on page 6. The experiments are for justifying the theory only. Supplementary Material: I just had a quick look at the supplementary material. It is good to include Appendix A where the authors provide background on some specific terms. Relation To Broader Scientific Literature: The theory in the paper is to understand the underlying mechanism of Mixture-of-Experts and how it works. Essential References Not Discussed: Most of the relevant references have been cited. However, I encourage the authors to cite at least one reference for the teacher models in Assumption 3.2. If this model is first introduced by this paper, then the authors should elaborate on the formulation of the data generation process. Additionally, in lines 096-098 in the right column, the authors should add references for the claim "The complexity of learning a nonlinear function via two-layer neural network optimized by SGD is closely associated with this value". Other Strengths And Weaknesses: **Strengths:** 1. 
The problem considered in this paper is of interest due to the recent success of Mixture-of-Experts in large-scale models. 2. The theoretical results are corroborated with rigorous proofs. **Weaknesses:** 1. The writing of Section 3.1 and Section 3.2 is not really good. In Section 3.1, the writing in Assumption 3.2 should be improved. In particular, the authors should provide an intuitive explanation of why they have to involve Hermite polynomials and Hermite expansions, either before or after introducing them. The assumptions on model parameters, e.g., $\omega^*_c$, $\omega^*_g$, are not explained thoroughly. Additionally, the roles of parameters $s_c$, $\nu$ are not introduced, making it inaccessible for people who do not have a background on the considered problem. In Section 3.2, the introduction to the formulation of MoE is quite complicated as the authors discuss different gating functions and routing strategies alternately without clear separation, making it difficult to digest. 2. Although the paper considers a different problem than that in (Chen et al., 2022), the practical implications are quite similar, that is, an MoE is capable of performing something that a single expert cannot do. It would be more interesting if the theory could provide new insight into improving MoE performance. 3. **Major issue:** The mixture-of-experts formulation considered in this paper is far from practice. In particular, in the formulation of $\hat{F}_M$ in line 119 in the right column, whether an expert $f_m$ is activated or not depends entirely on the positivity of its respective gating value $h_m$ rather than the magnitude rank (TopK) of the gating value as in the literature (Shazeer et al., 2017). This point makes me confused about the relevance of the theory in this paper. **References** Chen, Z., Deng, Y., Wu, Y., Gu, Q., and Li, Y. Towards understanding the mixture-of-experts layer in deep learning. In Advances in Neural Information Processing Systems, volume 35, pp.
23049–23062. Curran Associates, Inc., 2022. Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., and Dean, J. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In International Conference on Learning Representations, 2017. Other Comments Or Suggestions: 1. In Assumption 3.2, I think $v_c$ should be $v^*_c$. Please feel free to correct me if I am wrong. 2. In line 111 in the left column, should $g^*_j$ be $g^*$? Questions For Authors: 1. Why do the parameters $\omega^*_c, \omega^*_g, v^*_c$ belong to the set $\mathbb{S}^{d-1}$ rather than $\mathbb{R}^d$? 2. How do you guarantee that $k$ experts will be activated given the router $1[h_m(x)\geq 0]$ in the formulation of $\hat{F}_M$? Based on the router formulation, there could be more or fewer than $k$ experts activated per input. 3. At a high level, what are the reasons for involving the Hermite expansion in this case? 4. Can the current results be generalized to the sparse MoE setting in (Shazeer et al., 2017)? If not, what are the main challenges? **References** Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., and Dean, J. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In International Conference on Learning Representations, 2017. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: # Reviewer YoCg, We sincerely appreciate the reviewer's thoughtful and constructive feedback. Below, we carefully address the raised concerns and questions. Due to character limits, please see our response to Reviewer **wqTg** for references. ## Model parameters Following the reviewer's precise feedback, we will revise the manuscript to provide a clear explanation of $w^*_c$, $w^*_g$, $s_c$, and $\nu$. ## Router Please refer to the **"Router"** part in our response to Reviewer **bEDR**. ## Implications Chen et al. (2022) present a failure of a single expert in binary classification where noisy features confound the true label. In contrast, our work focuses on a **regression** setting and reveals a different failure mode where **gradient interference** across clusters attenuates the signal, increasing the information exponent and hindering optimization. Our findings highlight how MoE mitigates interference by explicitly partitioning the network. Such interference has been recognized as a major challenge in multi-task learning (see lines 125-127 in the left column). Furthermore, we demonstrate that, for the router to distinguish between experts, a **weak recovery** of the feature index is necessary. Our analysis reveals that this requires a sample complexity of $\tilde{\Theta}(d^{k^*-1})$ during the exploration stage, implying that a sufficiently **long exploration phase** is essential before the router can begin meaningful learning. This contrasts with the linear expert setting in Li et al. (2025), as our findings arise from non-convex optimization. Finally, unlike in classification (Chen et al., 2022; Chowdhury et al., 2023), in the regression setting, **competition among experts** within the set of professional experts $\mathcal{M}_c$ may lead to load imbalance after the router has learned to assign inputs. This observation suggests a practical motivation for several heuristic approaches (see lines 122–131 in the right column).
## References According to the reviewer’s suggestion, we will incorporate references Chen et al. (2022) and Allen-Zhu & Li (2023) as inspiration for our data structure, and Liu et al. (2024) and Tang et al. (2025) as practical works on gradient interference in MoE in Assumption 3.2. We will also include Arous et al. (2021), Ge et al. (2018), Dudeja et al. (2018), Bietti et al. (2022), Damian et al. (2022), and Oko et al. (2024a) in lines 096–098 in the right column. ## $v_c$ and $g_j$ We appreciate the reviewer pointing that out. The notation $v_c$ is correct, but $g_j$ is a typo. We will correct the typo in the final version. ## Q1. In our setting, feature vectors are sampled from $\mathbb{S}^{d-1}$ rather than from $\mathbb{R}^d$, in order to normalize signal strength for the analysis of simultaneous learning of multiple signals. This setting is frequently employed in prior works (e.g., Arous et al., 2021; Dudeja et al., 2018; Bietti et al., 2022; Damian et al., 2022; Oko et al., 2024a,b). We would also like to note that, in high dimensions, a uniform vector on $\mathbb{S}^{d-1}$ converges in distribution, coordinate-wise, to $\mathcal{N}(0,d^{-1}I_d)$ by concentration of measure. ## Q2. As the reviewer correctly pointed out, in $\hat{F}_M$, the number of activated experts is not fixed and $k$ is a stochastic variable. There have indeed been prior attempts to vary the number of activated experts depending on each token (Huang et al., 2024; Zeng et al., 2024). ## Q3. When analyzing the gradient, the correlation between the **nonlinear** target and the **nonlinear** activation arises. To decompose this correlation, we employ the Hermite expansion, which forms an **orthogonal** basis in the $L^2$ space under the Gaussian measure, as first used in Ge et al. (2018) and Dudeja et al. (2018).
Specifically, we have $$ \mathbb{E}\_{z \sim \mathcal{N}(0, I_d)} [ f^*\_c({w^*\_c }^\top z) a_{m,j} \sigma_m({w_{m,j}}^\top z + b_{m,j}) ] = \sum_{i = k^*}^{p^*} \alpha_{m,j,i} \beta_{c,i} {\langle w_{m,j}, w^*_c \rangle}^i. $$ This expresses the interaction as an expansion in powers of **alignment**. Crucially, the first non-zero order $k^*$ in the expansion (i.e., information exponent) **governs** the signal strength. ## Q4. In our study, the top-1 routing used in Phase II constitutes a sparse MoE. Although the adaptive top-$k$ routing employed in Phases III and IV allows $k$ to vary, it remains a sparse MoE in the sense that not all experts are activated. While adaptive top-$k$ is motivated by technical challenges in the theoretical analysis of nonlinear regression, we believe that end-to-end training with top-1 routing is feasible with the incorporation of auxiliary losses to control load balancing, which presents an interesting direction for future work. We will incorporate the revisions identified through this review into the final version. We would be happy to address any further questions during the discussion. --- Rebuttal Comment 1.1: Comment: Dear the Authors, Thank you for your response, I really appreciate it. After reading the rebuttal, I am still not convinced by your response to the Weakness #3 about the sparse router. Note that all the popular LLMs such as GPT and DeepSeek use sparse routers which determine activated experts based on the magnitude of their affinity scores rather than the positivity of the affinity scores. Furthermore, in the literature of MoE, the sparse routers with the stochastic number of activated experts as proposed in the paper have not been shown to have clear benefits over the traditional sparse router as in (Shazeer et al., 2017). For these reasons, I think that the theory of this paper is quite irrelevant to practice. Hence, I will keep the original rating. 
--- Reply to Comment 1.1.1: Comment: ## Reviewer YoCg, We sincerely thank the reviewer for their thoughtful response and for taking the time to review our paper. While we understand that the reviewer does not plan to raise their score, we would like to respectfully clarify a point for completeness and for the benefit of other reviewers and the area chair. Despite theoretical idealizations, we firmly believe that both the implications we have presented and the insights into the learning dynamics—specifically, how the router learns the underlying cluster structure via gradients of the gating network, which reflect signals related to the weak recovery of the experts—are novel and well supported. Accordingly, we believe our findings are practically relevant. While it is true that in practical LLMs, the top-k experts are often selected based on the largest values of ${\theta_m}^\top x_c$, adaptive top-$k$ routing based on ${\theta_m}^\top x_c \geq 0$ in Phase III and IV was introduced to maintain analytical tractability and to better handle competition among professional experts, as heuristic methods such as auxiliary losses for learning nonlinear regression with nonlinear experts (a direction largely avoided in prior works) would significantly complicate the mathematical analysis.
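As a numerical aside on the Hermite-expansion machinery invoked in the Q3 answer above: for unit vectors $u, v$ and $z \sim \mathcal{N}(0, I_d)$, the projections $u^\top z$ and $v^\top z$ are standard Gaussians with correlation $\langle u, v \rangle$, and the classical identity $\mathbb{E}[He_j(u^\top z)\,He_k(v^\top z)] = \delta_{jk}\, k!\, \langle u, v \rangle^k$ (probabilists' Hermite polynomials) is what makes alignment enter in powers. A quick Monte Carlo sanity check, purely illustrative:

```python
import numpy as np

def he2(x):  # probabilists' Hermite polynomial He_2(x) = x^2 - 1
    return x**2 - 1.0

def he3(x):  # He_3(x) = x^3 - 3x
    return x**3 - 3.0 * x

rng = np.random.default_rng(0)
d, n, rho = 16, 400_000, 0.6

# Unit vectors u, v with overlap <u, v> = rho.
u = np.zeros(d); u[0] = 1.0
v = np.zeros(d); v[0] = rho; v[1] = np.sqrt(1.0 - rho**2)

z = rng.normal(size=(n, d))
a, b = z @ u, z @ v            # standard Gaussians with correlation rho

same = np.mean(he2(a) * he2(b))   # theory: 2! * rho^2 = 0.72
cross = np.mean(he2(a) * he3(b))  # theory: 0 (orthogonality across orders)
```

Both estimates concentrate around their theoretical values up to Monte Carlo error; the vanishing cross term is exactly the orthogonality under the Gaussian measure that the rebuttal relies on.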
SlimLLM: Accurate Structured Pruning for Large Language Models
Accept (poster)
Summary: This work involves compressing LLMs by width pruning of sublayers in transformer blocks, where MHA and FFN are treated differently. For MHA, head pruning is performed based on the similarity between outputs with and without specific heads. For FFN, channel pruning is carried out using a Wanda-based metric from LoRAP, which additionally considers feature direction along with feature magnitude. Performance recovery is achieved through a linear regression method, and a non-uniform pruning ratio is applied across different blocks. The experiments are exclusively conducted on the LLaMA family. Claims And Evidence: This work is largely supported by empirical evidence, and the approach to pruning MHA and FFN differently seems interesting. Although additional hyperparameters are introduced, the type of performance recovery employed is novel to me. However, some aspects of FFN pruning remain unclear, and heuristics are used to determine non-uniform pruning ratios. The paper introduces one method for MHA pruning (A), another for FFN pruning (B), and a third for performance recovery (C). While (A) and (C) appear promising and interesting, the rationale and effectiveness of (A), (B), and (C) independently are not well justified. Additionally, several necessary ablation studies are missing. Methods And Evaluation Criteria: The proposed framework aligns well with previous studies on LLM pruning. Given the emphasis on LLM compression in this work, the use of evaluation metrics seems adequate. Theoretical Claims: While the paper does not make theoretical claims, it does provide clear descriptions of the equations used in the method. Experimental Designs Or Analyses: * Measuring the impact of per-head removal as shown in Equation (4) is intriguing, yet its effectiveness is still in question. 
Regarding MHA pruning, is it more effective than gradient-based importance measures like those used in LLM-Pruner, AWSVD in LoRAP, or the fluctuation metric in FLAP, when all other conditions are identical? Although Tables 1 and 2 offer comparisons with several methods, the inclusion of other factors, such as FFN pruning and non-uniform ratios, makes it challenging to clearly evaluate the effectiveness of the proposed metric for MHA. - [FLAP] https://arxiv.org/abs/2312.11983 * Furthermore, the computations in Algorithm 1 appear to be intensive due to its greedy search nature. How long do these computations typically take, and is it possible to provide comments on this aspect? * I think the pruning importance metric for the FFN is quite similar to LoRAP, except it employs a sigmoid function for the down projection. Why is the down projection designed in this particular way while the gate and up projection do not follow the same design? Is it impossible to apply Eqn (7) to the gate and up projection matrices, or is there another reason? * Linear regression for performance recovery seems effective; however, are A and B in Eqn (10) additional, newly introduced parameters? It appears that A and B are necessary for the inference process. If this is the case, it may not integrate well with other inference engines, such as vLLM and TensorRT-LLM, because it requires modifications to the forward code and involves saving additional parameters. This could limit the practicality of the work. * How did the authors set r_0​ and alpha in Eqn (11)? The statement, 'Consequently, we employ a stratified pruning strategy, wherein the pruning ratios are systematically lower in the shallower layers and incrementally higher in the deeper layers,' (line 241 of page 5) seems entirely heuristic and lacks a systematic foundation. Furthermore, the generalizability to other LLMs beyond the LLaMA family is not guaranteed. 
* For the results for Llama-2 in Table 2, I am not convinced that SlimLLM outperforms LoRAP. The numerical results, such as WikiText2 PPL and downstream accuracy, appear very similar, and in some cases, LoRAP seems to perform better. * The number and type of calibration samples have not been investigated. * The experimental validation in this study seems restricted since only models from the LLaMA family are used. To better ascertain the widespread applicability and superiority of this method, I suggest conducting experimental comparisons with additional models such as OPT, Qwen, Phi, and MoE-based architectures. * The tests were limited to models with only 7B parameters. Including models with at least 13B parameters, as done in previous studies, is essential to provide a comprehensive assessment. * The methodology used to calculate latency gains in Table 3 needs further elaboration, especially since width pruning often faces challenges in achieving actual speedups in setups like ZipLM and Shortened LLaMA. Could you clarify which framework was used (e.g., Vanilla HuggingFace, TensorRT-LLM, vLLM)? It's also important to know if the method can accelerate both the prefill and decoding stages, as the current results seem to focus solely on prefill with single-token generation. Is a batch size of 1 appropriate? Supplementary Material: I haven't checked the supplementary material. Relation To Broader Scientific Literature: Reducing computational requirements for LLMs is widely discussed, and this work tackles it in a clear and effective manner. Essential References Not Discussed: * The discussion lacks mention of certain pruning studies such as width pruning techniques like FLAP and Minitron, and depth pruning methods including Shortened LLaMA, SLEB, and Minitron. It seems essential to compare this work with FLAP, given its significant benefit of eliminating the need for retraining, and because analyzing the differences between original and pruned features is relevant. 
- [FLAP] https://arxiv.org/abs/2312.11983 - [Minitron] https://arxiv.org/abs/2408.11796 - [Shortened LLaMA] https://arxiv.org/abs/2402.02834 - [SLEB] https://arxiv.org/abs/2402.09025 Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: * I am uncertain about the relevance of the quantization and low-rank approximation methods discussed in the 'Compression technologies' part of Sect 2. Related Work to this study, as the authors do not test the compatibility of these methods with the proposed pruning technique. Additionally, there appears to be no citation for the studies mentioned in this subsection. * Figure 3, which illustrates the impact of each layer on LLMs, presents a concept that is already familiar in this field, as evidenced by similar graphs in ShortGPT, SLEB, and Shortened LLaMA. This raises questions about the value this graph adds to the paper. Furthermore, relevant citations are missing in line 274 on page 5, where the paper states, 'Similar to many current methods, we...'. Questions For Authors: Please refer to the sections on <Claims And Evidence> and <Experimental Designs Or Analyses>. I like several aspects of this work and appreciate the authors’ efforts in compressing LLMs. However, several justifications and ablation studies appear to be missing, and the experimental validation seems weak compared to relevant papers. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

Q1: Proposed metric for MHA

A1: We design a similarity-based approach to enhance linear fitting by maximizing output similarity. To verify its effectiveness, we employed the fluctuation metric from FLAP to evaluate the importance of heads, while aligning all other configurations. From the results, it can be observed that our similarity-based strategy achieves a lower PPL.

| method | ratio | wikitext2$\downarrow$ | PTB$\downarrow$ | Avg.$\uparrow$ |
|------------------------|-------|-----------|-------|------------------|
| fluctuation metric | 50% | 46.34 | 87.25 | 44.44 |
| similarity-based metric | 50% | 37.89 | 67.68 | 50.10 |

Q2: Computations in Algorithm 1

A2: The computational cost of greedy search is acceptable. The time spent on greedy search is approximately 2 seconds for each layer on Llama-7B.

Q3: Importance metric for the FFN

A3: Yes, there is another reason. When pruning the input channels of the down projection, each channel's input activation only affects the magnitude of its output vector, while the direction is determined by the weights. For the up and gate projections, their outputs correspond to the input activations of the down projection, and thus we have not considered their impact on the direction of the final output matrix.

Q4: Linear regression

A4: A and B can be integrated into the output projection. Here, the weight matrix of the output projection is $A \cdot W_{o}$, and the bias term is $B + B_{o}$.

Q5: $r_0$ and alpha

A5: Under the pruning ratio $r$, $r_0$ is calculated according to the following formula: $r_{0} = r \times \text{pruned\_layers} / \text{total\_layers}$. For alpha, we provide results for various values. Other models can use these results for adjustment.
| alpha | ratio | wikitext2 | PTB | Avg.$\uparrow$ |
|-------------|-------|-----------|-------|------------------|
| 1 | 50% | 54.17 | 96.57 | 44.42 |
| 4 | 50% | 43.36 | 83.91 | 44.95 |
| 7 | 50% | 37.89 | 67.68 | 50.10 |
| 10 | 50% | 44.82 | 75.80 | 46.86 |

Q6: Performance

A6: The experimental results show that the advantages of our model generally decrease after fine-tuning. This is mainly because the fine-tuning strategies of the two methods cannot be fully aligned. During compression, LoRAP uses low-rank decomposition for MHA's linear layers, while LoRA fine-tuning does not. LoRAP reconstructs the decomposed matrices into a single matrix to keep MHA's parameter count unchanged, which increases the parameter count during LoRA training.

Q7: Calibration samples

A7: We have added experiments to investigate the impact of the calibration set size on model performance.

| #samples | ratio | wikitext2 | PTB | Avg.$\uparrow$ |
|-------------|-------|-----------|-------|------------------|
| 1 | 50% | 58.8 | 129.95 | 46.50 |
| 4 | 50% | 45.89 | 99.64 | 49.14 |
| 8 | 50% | 45.98 | 99.25 | 47.89 |
| 16 | 50% | 44.3 | 80.06 | 45.06 |
| 32 | 50% | 37.89 | 67.68 | 50.10 |
| 64 | 50% | 39.56 | 72.05 | 49.31 |

Q8: Experiments

A8: To demonstrate the generalizability of our strategy, we have included experimental results on Vicuna-7B.

| Methods | ratio | BoolQ | PIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | Average |
|--------------------|-------|-------|-------|-----------|------------|-------|-------|-------|---------|
| Vicuna-7B | 0% | 75.69 | 77.91 | 71.04 | 67.80 | 68.98 | 40.7 | 42.20 | 63.47 |
| SlimLLM w/o tune | 20% | 74.92 | 76.12 | 67.98 | 65.82 | 67.09 | 39.33 | 42.60 | 61.98 |
| SlimLLM w/ tune | 20% | 76.15 | 76.39 | 69.32 | 64.72 | 68.56 | 39.25 | 41.80 | 62.31 |

Q9: 13B model

A9: Thank you for your suggestion. We have added experiments on LLaMA-13B.
| Methods | ratio | BoolQ | PIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | Average |
|--------------------|-------|-------|-------|-----------|------------|-------|-------|-------|---------|
| LLaMA-13B | 0% | 68.47 | 78.89 | 76.24 | 70.09 | 74.58 | 44.54 | 42.00 | 64.97 |
| LoRAP w/o tune | 20% | 73.94 | 77.31 | 74.93 | 69.69 | 70.79 | 40.44 | 41.40 | 64.07 |
| SlimLLM w/o tune | 20% | 74.13 | 77.53 | 74.73 | 69.30 | 70.45 | 42.32 | 41.00 | 64.21 |

Q10: Latency

A10: The framework is vanilla HuggingFace. To test decoding latency, we set the batch size to 8 and measured the latency for generating 256 tokens. The results are as follows:

| ratio | latency(s) |
|-------------|------------|
| 0% | 13.12 |
| 20% | 11.48 |
| 50% | 9.38 |

Others: Thank you for the comments on writing; we will revise the paper carefully.

---

Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal and the additional experiments. I believe my initial score may have been somewhat harsh, as I was unsure about the competitiveness of your work. However, your thorough responses and the comments from other reviewers have alleviated my concerns. I would like to increase my score from 1 to 3 for the following reasons:
* During the re-read, similarity-based MHA head pruning and regression-based performance recovery feel interesting again. Also, the authors describe the greedy search cost for head pruning, which does not seem to take long (A2).
* The comparison with FLAP appears to be impressive (A1), and the additional matrices for performance recovery (A4) appear to be integrable, ensuring compatibility with existing serving frameworks.
* Furthermore, several ablations strengthen the experimental validation.

However, I am still concerned about the settings for r_0 and alpha (A5), i.e., whether these values are applicable to other models, the effectiveness over LoRAP for Llama-2-7B and Llama-1-13B, and a concern that the models examined seem a bit outdated.
Releasing the code would be beneficial for verifying reproducibility and facilitating future work.
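The linear-regression recovery and the folding described in A4 above (which also answers the serving-compatibility concern) can be sketched numerically. This is a minimal stand-in with random matrices, assuming a row-vector convention $O = XW_o + b_o$: the correction $A, B$ is fitted by least squares on calibration outputs and then absorbed into the output projection, so inference code needs no extra parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_tok = 64, 512

# Hypothetical calibration outputs of a sub-module before/after pruning.
O_orig = rng.normal(size=(n_tok, d))
O_pruned = O_orig + 0.3 * rng.normal(size=(n_tok, d))  # stand-in pruning error

# Least-squares fit of O_pruned @ A + B ≈ O_orig.
X = np.hstack([O_pruned, np.ones((n_tok, 1))])  # append a bias column
coef, *_ = np.linalg.lstsq(X, O_orig, rcond=None)
A, B = coef[:-1], coef[-1]

# The fit can only reduce the reconstruction error, since A = I, B = 0 is feasible.
err_before = np.linalg.norm(O_pruned - O_orig)
err_after = np.linalg.norm(X @ coef - O_orig)
assert err_after <= err_before + 1e-9

# Fold the correction into the output projection:
# (h @ W_o + b_o) @ A + B == h @ (W_o @ A) + (b_o @ A + B),
# so the corrected layer is again a single linear layer.
W_o, b_o = rng.normal(size=(d, d)), rng.normal(size=d)
h = rng.normal(size=(8, d))
corrected = (h @ W_o + b_o) @ A + B
folded = h @ (W_o @ A) + (b_o @ A + B)
assert np.allclose(corrected, folded)
```

In the column-vector convention, the same folding gives the $A \cdot W_o$ weight and $B + B_o$ bias stated in A4.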
Summary: This paper proposes SlimLLM, a structured pruning method for LLMs that evaluates redundancy via Pearson similarity-driven head pruning and PCA-guided FFN channel pruning. A lightweight linear recalibration reduces post-pruning accuracy loss, while dynamic layer sparsity optimizes resource allocation. Experiments show strong performance on LLaMA models, surpassing prior methods. Claims And Evidence: The claims made in the paper are well-supported. Methods And Evaluation Criteria: The evaluation criteria and datasets are adequate for this task. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental designs and analyses appear to be solid. The proposed method is evaluated on various benchmarks and extensive ablation studies are provided. Supplementary Material: No additional supplementary materials. Relation To Broader Scientific Literature: SlimLLM provides a novel and accurate method for pruning LLMs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths** 1. Eigenvector-guided PCA preserves critical feature directions in FFN layers, surpassing magnitude-only criteria. 2. The proposed method demonstrates stronger performance than prior methods like LoRAP and LLM-Pruner. 3. The linear regression strategy shows great impact for performance recovery, as demonstrated in Table 5. **Weaknesses** 1. This paper should be carefully proofread. For example, what is $O_{-h_i}$ in Figure 1? $S_{-p}$ is not easy to understand in Algorithm 1 and should be replaced by a better equation format. 2. What exactly is the latency reported in Table 3? Prefill latency or decoding latency? More details should be included. 3. The proposed method is evaluated for LLMs. Is it possible that this method could also prune vision-language models? Other Comments Or Suggestions: For LLaMA2-7B, the proposed method obtains slightly poorer performance than prior methods at a pruning ratio of 50% with fine-tuning.
More analysis and discussions are encouraged. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal:

Q1: This paper should be carefully proofread. For example, what is $O_{-h_{i}}$ in Figure 1? $S_{-p}$ is not easy to understand in Algorithm 1 and should be replaced by a better equation format.

A1: Thank you for your suggestion. $O_{-h_{i}}$ denotes the output excluding the $i$-th head. $S_{-p}$ represents the set of unpruned attention heads: $S_{-p} = \{head_{i} \mid i \in \text{unpruned head indices}\}$. The detailed descriptions will be added in the paper.

Q2: What exactly is the latency reported in Table 3? Prefill latency or decoding latency? More details should be included.

A2: Thank you for your suggestion. We show the prefill latency in Table 3 and will add details in the paper. Additionally, we conducted experiments to measure the decoding latency. The table below shows the latency results for generating 256 tokens with a batch size of 8.

| ratio | #Param | latency(s) |
|-------------|----------|------------|
| 0% | 6.7B | 13.12 |
| 20% | 5.4B | 11.48 |
| 50% | 3.4B | 9.38 |

Q3: The proposed method is evaluated for LLMs. Is it possible that this method could also prune vision-language models?

A3: Yes, it is. For vision-language models (VLMs), the pruning strategy needs to be adapted, e.g., to the proportion of visual tokens to text tokens. We are currently conducting preliminary experiments on Qwen-VL-7B, and the experimental results are shown in the table. We will present more details in our future work.

| Datasets | prune ratio | MME | MMBench_dev_en |
|---------------------|--------------|---------|----------------|
| Qwen-VL-7B | 0% | 2292.88 | 77.9 |
| Qwen-VL-7B w/o tune | 20% | 2047.691 | 74.9 |

---

Rebuttal Comment 1.1: Comment: The authors fully addressed my concerns; I will raise my score to 4.
Summary: This paper proposes SlimLLM for pruning large language models (LLMs). The method uniquely combines Pearson correlation analysis for attention head redundancy detection with PCA-based directional importance for FFN channel pruning. A lightweight linear calibration technique minimizes post-pruning performance degradation, while layer-specific sparsity allocation leverages input-output alignment. Evaluations on LLaMA-7B and LLaMA2-7B demonstrate significant efficiency gains while maintaining competitive accuracy on commonsense reasoning tasks. Claims And Evidence: The authors claim SlimLLM achieves state-of-the-art structured pruning for LLMs. This is substantiated by Table 1, where SlimLLM outperforms LoRAP by 2.85% average accuracy at 50% pruning without fine-tuning, and Table 3, which highlights a 3.4× speedup on LLaMA-7B. Methods And Evaluation Criteria: The framework is evaluated on various tasks (BoolQ, PIQA etc.) using LLaMA-family models. Latency measurements on NVIDIA V100 GPUs confirm practical applicability. Theoretical Claims: None explicitly stated. Experimental Designs Or Analyses: This paper provides extensive experimental results and ablation studies. Supplementary Material: NA. Relation To Broader Scientific Literature: LLM pruning is a popular topic and this paper proposes a novel method that achieves strong performance. Essential References Not Discussed: NA. Other Strengths And Weaknesses: **Strengths** - PCA-based feature importance for FFN channels addresses a critical limitation of magnitude-only pruning, preserving directional information. - The proposed method achieves great improvement for running speed, making it viable for real-time applications. - The linear regression strategy is an interesting idea and recovers performance with negligible computational overhead. **Weaknesses** 1. Computational complexity of iterative head pruning (Algorithm 1) is unaddressed, raising concerns for larger models. 2. 
Linear regression is applied only to output matrices. Is it possible to extend it to intermediate layers to further reduce accuracy loss? Other Comments Or Suggestions: See the weaknesses part. Questions For Authors: How does the computational cost of greedy search scale with model size? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Q1: Computational complexity of iterative head pruning (Algorithm 1) is unaddressed, raising concerns for larger models. A1: The computational complexity of Algorithm 1 is acceptable. First, the number of heads is generally small, which limits the number of iterations in the algorithm. Second, the outputs of each head can be precomputed, and each iteration only involves the calculation of similarity, which reduces the number of redundant calculations during the iterations. For each layer, the time spent on greedy search is approximately 2 seconds when pruning Llama-7B. Q2: Linear regression is applied only to output matrices. Is it possible to extend it to intermediate layers to further reduce accuracy loss? A2: We compensate for the pruning error by performing linear fitting between the outputs before and after pruning. This requires that the outputs of the two stages maintain the same dimensionality. Currently, the output matrices of MHA and FFN meet this requirement. However, for other linear layers, since the output channels have been pruned, this method cannot be directly applied at present.
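The two cost savings described in A1 above (precompute head outputs once; each greedy iteration only recomputes similarities) can be sketched as follows. This is a hypothetical toy version of the greedy search, not the paper's exact Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n_heads, n_prune = 8, 4
n_tok, d = 256, 32

# Per-head output contributions, precomputed once; the full MHA output is
# modelled here as the sum of head contributions, flattened per head.
head_out = rng.normal(size=(n_heads, n_tok * d))
O_full = head_out.sum(axis=0)

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

kept = list(range(n_heads))
for _ in range(n_prune):
    # Each greedy step only recomputes similarities, never the head outputs:
    # drop the head whose removal keeps the output most similar to O_full.
    drop = max(kept, key=lambda h: pearson(
        O_full, head_out[[k for k in kept if k != h]].sum(axis=0)))
    kept.remove(drop)

print(sorted(kept))
```

With the per-head outputs cached, each of the `n_prune` steps costs only `O(len(kept))` similarity evaluations, which matches the ~2 s/layer figure quoted in A1 being small.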
Summary: This paper proposes a structured pruning method for large language models (LLMs) that compresses both the feed-forward (FFN) and attention layers to accelerate inference. The pruning algorithm incorporates two key techniques: (1) removing redundant attention heads based on Pearson similarity and (2) pruning FFN layers using principal component analysis (PCA) on feature representations. After pruning, the method employs linear regression to efficiently recover the pruned weights by minimizing reconstruction error. The paper evaluates the approach on the LLaMA-7B model, assessing performance across benchmarks including WikiText, PTB, and BoolQ, among others. Experimental results demonstrate that SlimLLM outperforms baselines such as LLM-Pruner, LoRAPrune, and LoRAP. ## update after rebuttal Thanks for the response. I will keep my initial positive score. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical results in this paper. Experimental Designs Or Analyses: Yes. Supplementary Material: Supplementary materials not available. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: ## Strengths: 1. SlimLLM is a simple yet highly practical structured pruning method. It requires no additional training and can be easily applied to various large language models (LLMs), making it a scalable solution for real-world deployment. 2. The proposed fast recovery method achieves performance levels comparable to low-cost fine-tuning. For example, at a 50% pruning rate, the LoRA-based LLM-Pruner attains a perplexity (PPL) of 38.12 on WikiText, while SlimLLM achieves a slightly lower PPL of 37.89, demonstrating its effectiveness in maintaining model accuracy despite significant compression. 3. 
Unlike uniform pruning approaches, SlimLLM incorporates a non-uniform pruning ratio, dynamically adjusting layerwise pruning based on cosine similarity between input and output representations. ## Weaknesses: 1. Although the proposed sub-methods have been proven effective, the relation between FFN pruning and attention-layer pruning is not so clear. Are they independent methods? 2. The paper mainly focuses on LLaMA models. It is encouraged to conduct more experiments on other models. 3. The performance gain appears to be not so significant at a 20% pruning ratio, as shown in Table 5. This suggests that SlimLLM might be more beneficial for higher compression levels but less impactful for moderate pruning. A deeper investigation into why performance varies across different pruning ratios would be valuable. 4. The layer-wise pruning ratio relies on the empirical hyperparameter α, lacking theoretical justification and more ablation studies. Other Comments Or Suggestions: N/A Questions For Authors: Please see the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

Q1: Although the proposed sub-methods have been proven effective, the relation between FFN pruning and attention-layer pruning is not so clear. Are they independent methods?

A1: In this method, both the head pruning and channel pruning strategies are designed to increase the linear correlation between the outputs before and after pruning, which benefits our linear regression strategy. Meanwhile, considering the significant difference in the number of heads and channels, we employ the PCA method in the FFN to make the pruned output as close as possible to the principal direction, while in the MHA we directly maximize the Pearson similarity between the outputs.

Q2: The paper mainly focuses on LLaMA models. It is encouraged to conduct more experiments on other models.

A2: Thank you for your suggestion. We have added the experimental results on Vicuna-7B.

| Methods | ratio | BoolQ | PIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | Average |
|--------------------|-------|-------|-------|-----------|------------|-------|-------|-------|---------|
| Vicuna-7B | 0% | 75.69 | 77.91 | 71.04 | 67.80 | 68.98 | 40.7 | 42.20 | 63.47 |
| LLMPruner w/o tune | 20% | 62.87 | 75.41 | 64.00 | 58.41 | 60.98 | 37.12 | 39.00 | 56.83 |
| LLMPruner w/ tune | 20% | 60.40 | 75.63 | 65.45 | 63.22 | 63.05 | 37.71 | 39.00 | 57.78 |
| LoRAP w/o tune | 20% | 76.42 | 76.38 | 68.31 | 64.96 | 65.82 | 37.29 | 38.60 | 61.11 |
| LoRAP w/ tune | 20% | 75.81 | 76.77 | 68.39 | 65.04 | 70.08 | 39.33 | 39.20 | 62.09 |
| SlimLLM w/o tune | 20% | 74.92 | 76.12 | 67.98 | 65.82 | 67.09 | 39.33 | 42.60 | 61.98 |
| SlimLLM w/ tune | 20% | 76.15 | 76.39 | 69.32 | 64.72 | 68.56 | 39.25 | 41.80 | 62.31 |

Q3: The performance gain appears to be not so significant at a 20% pruning ratio, as shown in Table 5. This suggests that SlimLLM might be more beneficial for higher compression levels but less impactful for moderate pruning.
A deeper investigation into why performance varies across different pruning ratios would be valuable.

A3: We posit that the impact on model performance is comparatively minimal at a pruning ratio of 20%, whereas the pruning error becomes significantly more pronounced at a pruning ratio of 50%. Our proposed strategy is capable of effectively mitigating this error, thereby yielding more substantial performance improvements at higher ratios.

Q4: The layer-wise pruning ratio relies on the empirical hyperparameter α, lacking theoretical justification and more ablation studies.

A4: Thank you for your suggestion. We have added ablation experiments on Llama-7B, and the results are as follows. When alpha increases to 7, the model achieves optimal performance; further increasing alpha leads to a decline in performance.

| alpha | ratio | wikitext2$\downarrow$ | PTB$\downarrow$ | Avg.$\uparrow$ |
|-------------|-------|-----------|-------|-----------------|
| 1 | 50% | 54.17 | 96.57 | 44.42 |
| 4 | 50% | 43.36 | 83.91 | 44.95 |
| 7 | 50% | 37.89 | 67.68 | 50.10 |
| 10 | 50% | 44.82 | 75.80 | 46.86 |
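The PCA idea from A1 above (keeping the pruned FFN output close to the principal direction of the layer output) can be illustrated with a hypothetical scoring rule. This is not the paper's Eqn (7); it is a sketch under the assumption that a channel's importance is the energy of its rank-1 contribution along the top principal direction of the output.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tok, d_ff, d = 512, 64, 32

act = np.abs(rng.normal(size=(n_tok, d_ff)))  # activations entering the down projection
W_down = rng.normal(size=(d_ff, d))           # down-projection weights

Y = act @ W_down
# Top principal direction of the output features (first right singular vector).
_, _, Vt = np.linalg.svd(Y - Y.mean(axis=0), full_matrices=False)
v = Vt[0]

# Channel j contributes the rank-1 term act[:, j:j+1] @ W_down[j:j+1]; score it
# by how much of that contribution lies along v (hypothetical metric), so that
# pruning keeps the output near the principal direction.
scores = np.abs(W_down @ v) * act.sum(axis=0)

keep = np.argsort(scores)[-d_ff // 2:]        # keep the top-50% channels
Y_pruned = act[:, keep] @ W_down[keep]        # shape (n_tok, d) is preserved
```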
Summary: The paper presents SlimLLM, a structured pruning approach for large language models (LLMs) that tackles channel and attention head pruning through a unified importance evaluation framework. The paper introduces several novel techniques, including using Pearson similarity to identify redundant attention heads with a greedy search strategy, evaluating FFN channel importance via PCA to prioritize feature directions, and a layerwise pruning strategy that adjusts sparsity based on input-output cosine similarity. Evaluated on LLaMA-7B/2-7B, SlimLLM achieves better accuracy and lower perplexity than baselines like LoRAP. Claims And Evidence: The claims made in the paper are well-supported by extensive comparisons and ablation studies. Methods And Evaluation Criteria: The proposed method is evaluated on various benchmarks for LLaMA-7B and LLaMA2-7B. Evaluation focuses on zero-shot accuracy across commonsense reasoning datasets (e.g., BoolQ, PIQA), perplexity (WikiText2, PTB), and inference latency. Theoretical Claims: The paper does not provide too much theoretical analysis. Experimental Designs Or Analyses: The experimental setup is reasonable, with all comparative experiments conducted under the same conditions. Additionally, extensive ablation studies are provided to demonstrate the effectiveness of each proposed module. Supplementary Material: NA. Relation To Broader Scientific Literature: The paper focuses on LLM pruning, a critical problem in the field that has garnered significant attention from both the academic community and industry. Essential References Not Discussed: NA. Other Strengths And Weaknesses: Strengths 1. SlimLLM introduces novel criteria for structured pruning, such as Pearson similarity-based head pruning and PCA-driven feature space importance for FFN channels. These methods move beyond element-wise aggregation, capturing interdependencies within sub-modules and providing a more comprehensive assessment of redundancy. 
The greedy search for head combinations further enhances this approach, addressing limitations of prior works like LoRAP that rely on weight magnitude or activation norms. 2. The proposed linear regression strategy for output matrix fine-tuning is both simple and effective. It achieves significant performance recovery with negligible computational cost, avoiding complex retraining. 3. The paper provides comprehensive comparative experiments and ablation studies to justify the effectiveness of the proposed method. Weaknesses: 1. The paper focuses on pruning-based acceleration of LLMs but lacks a comparison with quantization-based acceleration methods. Including such a comparison would better demonstrate the proposed approach's advantages of LLM compressing techniques. 2. Table 2 only compares the proposed method with LoRAP. Expanding the evaluation to include more state-of-the-art pruning or compression baselines would strengthen the demonstration of the method's effectiveness and generality. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal:

Q1: The paper focuses on pruning-based acceleration of LLMs but lacks a comparison with quantization-based acceleration methods. Including such a comparison would better demonstrate the proposed approach's advantages among LLM compression techniques.

A1: Thank you for your suggestion. Quantization is also a highly effective method for LLM compression. Since pruning and quantization are orthogonal compression strategies whose benefits can be accumulated, we have not compared our method with quantization at present.

Q2: Table 2 only compares the proposed method with LoRAP. Expanding the evaluation to include more state-of-the-art pruning or compression baselines would strengthen the demonstration of the method's effectiveness and generality.

A2: Thank you for your suggestion. In Table 2, we compared our method with the LoRAP method, which has excellent compression performance. However, due to memory limitations, we were unable to conduct experiments on LLMPruner with Llama2-7B. Below are the relevant results reproduced from other works, which show that our pruning strategy can better preserve model performance.

| Methods | ratio | BoolQ | PIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | Average |
|---------------|-------|-------|-------|-----------|------------|-------|-------|-------|---------|
| LLaMA2-7B | 0% | 71.04 | 78.40 | 72.96 | 67.17 | 69.32 | 40.53 | 40.08 | 62.89 |
| LoRAP | 20% | 69.24 | 76.39 | 69.15 | 65.11 | 61.99 | 35.58 | 38.60 | 59.44 |
| LLMPruner | 20% | 67.95 | 77.58 | 71.43 | 64.01 | 63.51 | 38.05 | 39.80 | 60.33 |
| SlimLLM (ours) | 20% | 69.79 | 76.28 | 68.88 | 63.54 | 65.74 | 39.08 | 39.80 | 60.44 |
NICE Data Selection for Instruction Tuning in LLMs with Non-differentiable Evaluation Metric
Accept (poster)
Summary: The paper proposes a data selection framework that computes the evaluation metric (e.g., reward) for validation samples and employs the Monte-Carlo policy gradient to calculate the influence of each training data point on the validation data. Claims And Evidence: See Other Strengths And Weaknesses Methods And Evaluation Criteria: See Other Strengths And Weaknesses Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: See Other Strengths And Weaknesses Supplementary Material: See Other Strengths And Weaknesses Relation To Broader Scientific Literature: See Other Strengths And Weaknesses Essential References Not Discussed: See Other Strengths And Weaknesses Other Strengths And Weaknesses: ### Strengths 1. The issue of data efficiency in instruction tuning datasets is an important topic in scaling LLMs. 2. The paper is easy to understand ### Weaknesses 1. The claim about validation loss vs. evaluation metrics may be imprecise. The authors primarily base their argument on a single experiment with a reward model (Figure 1). This raises concerns that the relationship between validation loss and evaluation metrics may not be generalizable and cause misunderstandings. 2. The motivation seems unclear. (1) The paper argues that high-quality labels are crucial yet often unavailable for loss-based data selection methods. It then relies on an additional model (e.g., a reward model or LLM judge) to generate the evaluation metric. How do the authors ensure the quality and reliability of these metrics? (2) There is insufficient evidence in the paper showing that the lack of labels is the main limitation of previous loss-based data selection methods. How will existing state-of-the-art loss-based data selection methods perform if established LLMs or [1] can be used to generate validation labels or according to the teacher-forcing loss? 3. 
Insufficient experiments to support the claim about the advantage of policy gradient compared to the gradient used in previous loss-based methods. Note that previous gradient-based methods might also be improved by incorporating multiple label responses from additional LLMs. Thus, it remains unclear whether the benefit arises from Monte Carlo sampling or from the introduced evaluation metric itself. 4. Some technical details remain unclear. For example, the paper projects the LoRA gradient into an 8192-dimensional vector in Line 290, but it does not illustrate the projection method used. 5. The choices of baselines may be outdated and less than ideal. (1) The paper introduces additional models to help data selection, which may create an unfair comparison against other baselines. (2) Some baselines (e.g., LESS) are specifically designed for target data selection, so applying them in a task-agnostic context may be inappropriate. I suggest the author further compare and discuss more recent data selection methods in each setting. 6. Monte Carlo sampling can be expensive, and the paper does not offer a comparative efficiency analysis of the computational cost for the proposed method and other baselines. [1] The ALCHEmist: Automated Labeling 500x CHEaper than LLM Data Annotators. In NIPS, 2024 ## update after rebuttal I appreciate that most of my concerns have been addressed. Consequently, I have increased my score to 3. I expect the authors to incorporate new results, analyses, and clarifications in the revised manuscript. Other Comments Or Suggestions: See Other Strengths And Weaknesses Questions For Authors: See Other Strengths And Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for highlighting the importance of our work on data efficiency in instruction tuning and for finding the paper easy to understand. Below are clarifications to address your concerns: 1. Since the discrepancy between NTP loss & eval. metrics has been shown in previous works (lines 380-1 L for refs.), we only consider a single experiment to further validate this claim. Nevertheless, we conduct experiments on 3 other datasets with different reward functions (eval. metrics) to show that the discrepancy between loss & metrics can be generalized. The results in [link](https://postimg.cc/GBMkCw0q) are similar to that in Fig.1: checkpoints with minimal loss (highest negative loss) do not correspond to checkpoints with the best performance (measured by the eval. metric) (lines 394-6 L); the performance can continue to increase even if the negative loss decreases. 2. Our work can be motivated using two limitations of loss-based data selection: (a) discrepancy between NTP loss & eval. metrics as discussed in point 1 (lines 19-43 R) & (b) dependency on high-quality labels which can be unavailable in practice (lines 149-51 L). In contrast, NICE can select training data to directly optimize commonly used eval. metrics of generation tasks (lines 46-9 R) & alleviate the dependency on validation labels when the eval. metric is independent of labels (lines 64-8 L). 2.1. High-quality labels (ground-truth responses) are often unavailable or available only in small quantities for generation tasks. So, a common practice is to use an LLM judge or train a reward model based on the small quantity of high-quality labels, and assume they are reliable, high-quality metrics that generalize to other inputs. Thus, this does not contradict the importance of high-quality labels. In fact, prior works have shown that these eval. metrics can track the response quality & align well with humans (Bai et al. 
2022; Dubois et al., 2024; Stiennon et al., 2020; Zheng et al., 2023) and are hence widely adopted for evaluating generation quality. 2.2. Tab.D below shows the performance of LESS using GPT-generated labels (as suggested, 2nd row), which is generally worse than LESS + true labels (1st row) and our approaches. Hence, the performance of loss-based data selection is hurt by the unavailability of true labels, highlighting their importance. Furthermore, using GPT-generated labels alone does not address the issue discussed in point 1.

|Table D|Alpaca|TLDR|RLHF|pass@1|pass@10|pass@100|
|-|-|-|-|-|-|-|
|LESS|15.44±2.74|1.54±0.21|1.44±0.07|0.092±0.01|0.261±0.00|0.475±0.02|
|LESS+GPT|15.07±0.91|1.47±0.60|3.03±0.01|0.043|0.188|0.417|
|NICE|16.79±0.85|1.61±0.39|2.82±0.10|0.104±0.02|0.274±0.01|0.486±0.02|
|NICE AMC|17.16±3.73|3.55±0.40|3.03±0.02|0.090±0.02|0.251±0.02|0.451±0.03|

3. Advantage of NICE (policy gradients) over LESS (loss-based gradients) is supported by experimental evidence in Tab.3: NICE generally outperforms LESS (lines 321-9 L). About the concern on MC sampling & the eval. metric $r$, note that the policy gradient is used to optimize $r$ with data selection and approximated by MC sampling. So, it won’t be meaningful if we consider MC sampling as a standalone component and remove the eval. metric (e.g., for an ablation study). To see this, not using the eval. metric is equivalent to treating all sampled responses equally, i.e., $r$ is a constant function, resulting in a policy gradient of 0, as derived in [link](https://postimg.cc/5jGSyF71). 4. We use Johnson-Lindenstrauss random projections (see lines 275-8 L). 5.1. In Tab.D above, we have provided the results of LESS + GPT-generated labels, making the comparison fair. However, as mentioned in point 2.2, LESS uses the true labels to outperform the variant with GPT-generated labels, and thus has an unfair advantage (yet poorer performance) over our approach in TLDR, RLHF & HumanEval (lines 278-81 R). 
Moreover, NICE (without AMC) already outperforms LESS without the help of additional models (Tab.3). 5.2. To clarify, our “task-agnostic” & “task-specific” settings pertain to the data preparation stage (to form the initial pool of training data without & with the knowledge of the target task, resp.) **prior** to data selection (lines 245-8 R, 255-7 R). For data selection, this work focuses on targeted (task-specific) data selection (lines 102-5 R). Hence, all our baseline choices are fair and valid. We will clarify this in the revised paper. In addition, we have included 2 more baselines TSDS & DSIR in Tab.C of reviewer d8As’s rebuttal. 6. While NICE incurs more computational cost due to MC sampling, this trade-off is justified by not needing validation labels—a key motivation of this work—and its improved performance over other methods (Tab.3). We defer the computational analysis to reviewer waTE’s rebuttal. We will include the above discussions in the revised paper. We hope we have addressed your concerns and improved your opinion of our work. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts to address most of my concerns. However, after reviewing all comments and the rebuttal, I still tend to maintain my score due to the following concerns: The claims about this paper's “task-agnostic” & “task-specific” settings are inconsistent and might contradict existing work. The author claims that the “task-agnostic and task-specific settings pertain to the data preparation stage prior to data selection”. However, they later compared data selection methods based on task-agnostic and task-specific categories. Also, the explanations provided in both the paper (Lines 245–248 and 255–257) remain unclear. For example, LESS is described as a task-specific data selection method, but it utilizes mixed-source training datasets that are not collected specifically for the downstream evaluation task. 
In Table 3, some baselines like LESS are designed as target data selection methods, suggesting that applying them in a task-agnostic context could also be inappropriate. I am still concerned that these inconsistencies may lead to misunderstandings within the community. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for reviewing our rebuttal and for allowing us to further clarify your concerns. **Problem setting:** We only focus on targeted data selection where we assume we have access to some validation data during data selection (lines 100-105, Left). Our “task-agnostic” and “task-specific” are used to describe the property of the **initial pool of training data** collected in the **data preparation** stage **prior** to data selection. Specifically, for the data preparation stage: - For the "task-agnostic" setting: the initial pool of training data is gathered **without** any knowledge of the target task. We used a mixed-source instruction-tuning training set in this setting. Intuitively, this mixed-source pool of data may contain many irrelevant data (e.g., assistant-style conversations) w.r.t. the targeted task (e.g., coding task). - For the "task-specific" setting: the initial pool of training data is gathered **with** an explicit focus on the target task. We select from the training data that mainly contains data relevant to the target task, e.g., selecting from codes (CodeAlpaca 20k) for a target coding task (HumanEval). **Perceived inconsistencies:** In the main paper, we did **not** explicitly categorize data selection methods using the terms “task-agnostic” and “task-specific” (even though they can be categorized in this way). As we focus on targeted data selection, we only and fairly compare methods that require the validation datasets and are applicable for **targeted** data selection. 
In addition, as our “task-agnostic” and “task-specific” settings refer to the property of the initial training data pool (**not** the availability of the validation data during the selection stage), all targeted data selection methods can be **appropriately** applied to these two types of pool of data without inconsistency. **Perceived contradictions with existing work**: We would also like to clarify that our settings do not contradict existing work. Our methodology aligns with established practices, such as those seen in previous works like LESS [1], which focus on "task-agnostic" data preparation, followed by targeted data selection. Specifically, our “task-agnostic” setting's mixed-source training dataset is the same training data used in LESS. In addition, we **extend** their experimental setup by including a "task-specific" (data preparation) setting, demonstrating the applicability of our approach (NICE) to more scenarios without contradicting the settings observed in prior studies. We will improve the clarity of the terms in the revised version. We hope that we have clarified that there is NO contradiction or inconsistency. If the confusion lies in the naming conventions, we can update “task-specific” to “task-aware” for the data preparation stage, which does not compromise the contributions of this work. [1] Xia, Mengzhou, et al. "Less: Selecting influential data for targeted instruction tuning." *arXiv preprint arXiv:2402.04333* (2024).
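As an aside on point 3 of the rebuttal above (whose derivation is only available through an external image link): the claim that a constant reward yields a zero policy gradient is the standard score-function identity, sketched here in our own notation:

```latex
\nabla_\theta \, \mathbb{E}_{y \sim \pi_\theta}\!\left[ r(y) \right]
  = \mathbb{E}_{y \sim \pi_\theta}\!\left[ r(y)\, \nabla_\theta \log \pi_\theta(y) \right]
  \overset{r \,\equiv\, c}{=} c \; \mathbb{E}_{y \sim \pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(y) \right]
  = c \, \nabla_\theta \!\int \pi_\theta(y)\, dy
  = c \, \nabla_\theta 1 = 0 .
```

The final step uses only that $\pi_\theta$ normalizes to one, so treating all sampled responses equally indeed collapses the policy gradient to zero.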
Summary: The paper proposes NICE, an RL-based framework for choosing instruction tuning data targeted for downstream tasks. The proposed method uses reward signals such as loss function or influence function on the validation data. The policy gradient is then used to estimate the training data point's influence on the given validation sets. The authors conducted experiments on instruction tuning of LLMs such as llama and mistral models. Claims And Evidence: The authors claim their selection strategy is better than loss-based strategies and other existing data selection strategies. There has also been other recent literature in this space such as https://arxiv.org/abs/2410.11303. The authors should compare to the methods proposed in this paper. The authors also claim their method works well in task agnostic settings, but fail to compare against some of the existing literature in this space, such as the methods that use DPP (https://arxiv.org/pdf/2402.02318) and submodular optimization (https://arxiv.org/abs/2401.06692). Methods And Evaluation Criteria: The methods and evaluation make sense and are quite thorough other than the baselines they compare against. Theoretical Claims: I did check the theoretical derivations. Experimental Designs Or Analyses: I did not find the specific details of what happens when the labels are unavailable. I think the paper could benefit from a better description of what was used in place of the labels $y$ in the experiment section. Supplementary Material: I did check the math derivation. Relation To Broader Scientific Literature: The paper falls under the general data selection for LLM instruction tuning. Essential References Not Discussed: In addition to the papers mentioned above, the authors are also encouraged to check out other methods mentioned in https://arxiv.org/pdf/2402.16827. Other Strengths And Weaknesses: Overall, the paper is very well written. 
I am willing to raise my score given the experiment comparisons with other baselines are conducted. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback on our writing and the thoroughness of our methods and evaluations. We wish to make the following clarification. **Claims And Evidence:** To clarify, our “task-agnostic” & “task-specific” settings pertain to the data preparation stage (to form the initial pool of training data without & with the knowledge of the target task, resp.) **prior** to data selection (lines 245-8 R, 255-7 R). For data selection, this work only focuses on targeted (task-specific) data selection (lines 102-5 R). However, methods based on DPP [https://arxiv.org/pdf/2402.02318] and submodular optimization [https://arxiv.org/abs/2401.06692] focus on non-targeted (task-agnostic) data selection without validation data, which is a different problem setting that falls outside the scope of our work. We will improve the clarity in the revised paper. To improve the comprehensiveness of the experiments for targeted data selection, we add TSDS [https://arxiv.org/abs/2410.11303] (as suggested) and DSIR (highlighted in the suggested survey [https://arxiv.org/pdf/2402.16827] [1] ) for the setting where the initial pool of training data does not have the knowledge of the target task (our "task-agnostic" setting). The results are shown in Tab. C below. Across multiple datasets and models, **our method generally continues to outperform the newly added baselines**. Although DSIR performs the best on AlpacaEval with the LLaMA2-7B model, it selects data based on n-gram lexical feature matching (similar to BM25) between training and target distributions, independent of model-specific signals. As a result, it selects the same subset regardless of the model used for instruction tuning, which can be sub-optimal. We can see it is no longer the best with Mistral-7B on the same task. This highlights the advantage of our model-aware data selection strategy. We will incorporate these baselines and citations in the revised paper. 
|Table C|Alpaca|TLDR|RLHF|pass@1|pass@10|pass@100|
|-|-|-|-|-|-|-|
|**Llama**|
|TSDS|11.42±1.33|1.42±0.07|1.01±0.12|0.1030±0.02|0.2547±0.01|0.4368±0.02|
|DSIR|20.27|1.53|2.57|0.0953|0.2402|0.4222|
|NICE|16.79±0.85|1.61±0.39|2.82±0.10|0.1035±0.02|0.2737±0.01|0.4859±0.02|
|NICE AMC|17.16±3.73|3.55±0.40|3.03±0.02|0.0904±0.02|0.2511±0.02|0.4510±0.03|
|**Mistral**|
|TSDS|25.91±2.49|3.47±0.00|1.83±0.15|0.2750±0.01|0.5978±0.02|0.8278±0.01|
|DSIR|29.31|3.48|2.94|0.2771|0.5681|0.7917|
|NICE|26.20±4.13|3.31±0.35|3.10±0.06|0.2948±0.01|0.6205±0.02|0.8559±0.01|
|NICE AMC|31.05±1.23|4.60±0.20|3.42±0.05|0.2996±0.02|0.6210±0.02|0.8567±0.00|

**Experimental Designs Or Analyses:** We would like to clarify that for three tasks—TLDR, RLHF, and HumanEval—the ground-truth labels of the validation dataset are not used by our method, although they are available to the baselines (lines 305–18, L). For AlpacaEval, labels are provided to all approaches. Specifically, NICE relies solely on the probability of generated responses and the score from a reward function, as enabled by the policy gradient mechanism (lines 130–132, R). In these three tasks, the **reward function does not require ground-truth labels**. Specifically:
- For RLHF and TLDR, the reward function is a learned reward model that outputs scores based on the prompt and generated response (lines 160-162, L).
- For HumanEval, the reward is defined by whether the generated code passes unit tests, not requiring reference solutions (lines 245-250, L).

We will make this distinction clearer in the experiment section to avoid confusion. **Essential References Not Discussed:** We have checked the survey paper in https://arxiv.org/pdf/2402.16827 and added a related work DSIR as a baseline for comparison. We will incorporate the citations of the survey paper and works that are not directly comparable but related like DPP, SKILL-IT[2] in the related work section. We thank the reviewer for the valuable comments. 
We hope the additional baselines make our comparison more thorough and improve your impression of our work. [1] Xie, Sang Michael, et al. "Data selection for language models via importance resampling." *Advances in Neural Information Processing Systems* 36 (2023): 34201-34227. [2] Chen, Mayee, et al. "Skill-it! a data-driven skills framework for understanding and training language models." *Advances in Neural Information Processing Systems* 36 (2023): 36000-36040. --- Rebuttal Comment 1.1: Comment: Could you please clarify how the validation data (for the targeted selection) makes a difference for task-agnostic settings? Intuitively, the task-agnostic setting does not seem to benefit much from having a separate validation set since the data are all i.i.d.? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the question and for allowing us to clarify. Our “task-agnostic” setting pertains to the **data preparation stage**, where the initial pool of training data is constructed *without any knowledge of the target task or validation data*. This ensures that the process of collecting this initial pool of training data remains entirely task-agnostic. As such, the validation data (for the targeted selection) does not provide any benefit at this stage since it is not used. On the other hand, the validation data/set (for the targeted selection) is utilized **during the data selection stage**, where the objective is to identify a subset of training data that maximizes performance on the target task’s validation set and supports the desired target capability (e.g., coding, summarization). For example, NICE selects training examples whose gradients are more similar to the policy gradient of validation data points. Including such examples improves validation performance and thereby supports better performance for the target task. 
Therefore, while the validation data/set **does not make a difference during the task-agnostic data preparation stage (our “task-agnostic” setting)**, it is crucial during the **targeted selection stage** by informing our algorithm to select training data that improve performance for the target task. We hope this clarifies our definition of the “task-agnostic” setting and the role of validation data in our experiments. We also hope it helps to address your concern and can improve your opinion of our work.
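To make the selection rule described in this thread concrete (training examples are scored by how well their gradients align with a validation policy gradient, after Johnson-Lindenstrauss random projection as mentioned in the first rebuttal), here is a toy, schematic sketch. All names, dimensions, and the Gaussian projection are our illustrative choices, not the authors' implementation.

```python
import numpy as np

def jl_project(grads, d_proj=16, seed=0):
    """Project gradient vectors to d_proj dims with a Gaussian JL random matrix.
    Entries ~ N(0, 1/d_proj), so inner products are preserved in expectation."""
    rng = np.random.default_rng(seed)
    d_full = grads.shape[1]
    proj = rng.normal(0.0, 1.0 / np.sqrt(d_proj), size=(d_full, d_proj))
    return grads @ proj

def influence_scores(train_grads, val_policy_grad):
    """Cosine similarity of each (projected) training gradient with the
    (projected) validation policy gradient; higher = more helpful to select."""
    tg = train_grads / np.linalg.norm(train_grads, axis=1, keepdims=True)
    vg = val_policy_grad / np.linalg.norm(val_policy_grad)
    return tg @ vg

# Toy example: 4 training gradients in a 32-dim parameter space.
rng = np.random.default_rng(1)
val_grad = rng.normal(size=32)
train = np.stack([val_grad,             # aligned with the validation signal
                  -val_grad,            # opposed to it
                  rng.normal(size=32),  # unrelated
                  rng.normal(size=32)])
# The SAME projection (same seed) must be used for both sides.
scores = influence_scores(jl_project(train, seed=2),
                          jl_project(val_grad[None], seed=2)[0])
ranked = np.argsort(-scores)  # indices of training points, best first
```

As expected, the example aligned with the validation gradient ranks first and the opposed one last; real systems would replace the toy vectors with per-checkpoint LoRA gradients.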
Summary: This paper proposes NICE, which provides an innovative and label-efficient approach to data selection by using policy gradients for non-differentiable evaluation metrics, outperforming several baselines in many benchmarks, including both task-specific and -agnostic settings. While its computational cost and complexity may limit its scalability, it seemingly shows promise for improving model performance. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Code. Relation To Broader Scientific Literature: Please refer to the Questions. Essential References Not Discussed: By following LESS, this paper introduces a data influence selection strategy based on evaluation metrics rather than loss or gradients that can be applied to both task-specific and -agnostic settings. I believe it would be beneficial to incorporate a discussion and experimental performance comparison with [1], even though [1] approaches data selection from the perspective of distribution matching. [1] TSDS: Data Selection for Task-Specific Model Finetuning Other Strengths And Weaknesses: Strengths: The authors' insight that there is no direct correlation between the validation set loss and the final test performance is reasonable and profound. Instead, directly using the performance reward for data selection and solving it through reinforcement learning is highly promising and adds innovation to this paper. Weaknesses: 1. In the paper, RL is used to solve the data selection problem based on performance rewards, which involves computationally expensive steps such as MC sampling. This introduces significant computational cost and instability. Is there a more in-depth discussion on the computational cost and stability? This could affect the reproducibility of the work and its subsequent use in future research. 2. Is there any potential data leakage in the experiments? 
For example, in Line 258, it is mentioned that the validation set of AlpacaEval and HumanEval share the same distributions as the corresponding test sets. If we were to construct a validation set from a subset of the training dataset without any validation data and without knowledge of the downstream evaluation data, how would the model perform in the task-agnostic setting? 3. There appears to be some inconsistency in the performance between NICE and NICE_AMC in Table 3. How can this be explained? Additionally, could the authors provide some theoretical guidance on when to choose NICE over NICE_AMC or vice versa? Other Comments Or Suggestions: Please refer to the Questions. Questions For Authors: Please refer to the Questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing our insight and our RL-based data selection approach. Please find our responses below. **Essential References Not Discussed:** We have included TSDS (as suggested) and an additional baseline, DSIR in Tab. C of our response to Reviewer d8As. Our method outperforms the new baselines across diverse datasets and models. **Weaknesses:** 1.1 **Computation cost** Although Monte Carlo sampling increases the computational cost, **this trade-off is justified by NICE not needing validation labels** - a key motivation of our work. NICE **fills a gap left by existing loss-based baselines by supporting data selection with unlabeled validation data** in cases where the evaluation metrics are label-independent. Furthermore, we can observe the performance improvement over other methods in Tab.3. Nevertheless, we provide a comparative analysis of the computational costs between NICE and LESS, showing **NICE remains within a practical computational range:** Tab. A lists the asymptotic complexity and wall-clock runtime (training is measured on a single H100 hour, others are on L40 GPU hours) for each stage in the data selection procedure. Tab. B highlights the validation gradient computation where NICE differs from LESS. Let $E, d, M$ denote the number of epochs (saved checkpoints), the dimension of the projected gradients, and the number of MC samples, resp. Let $D_W, D_N, D_V$ denote the warmup, training, and validation sets, resp. When the sizes of the validation set and the MC samples are small, NICE adds only marginal overhead to LESS (e.g., AlpacaEval). 
|Table A|Warmup LoRA|Training Grad Comp|Validation Grad Comp|Data Selection|
|-|-|-|-|-|
|Remark|LESS=NICE|LESS=NICE|NICE>LESS|LESS=NICE|
|Compute|$O(\|D_W\|E)$ 3h|$O(\|D_N\|E)$ 48h|LESS: $O(\|D_V\|E)$ 0.11h on average; NICE: $O(\|D_V\|EM)$ 14.67h on average|$O(\|D_N\|\|D_V\|d)$ < 0.02h|

|Table B ($\|D_V\|$)|$M$|NICE MC Sample|NICE Val Grad|LESS Val Grad|
|-|-|-|-|-|
|Alpaca=10|20|0.17h|0.05h|<0.02h|
|TLDR=322|20|8h|1.47h|0.08h|
|RLHF=2192|20|32h|10h|0.33h|
|HumanEval=10|500|5h|2h|<0.02h|

1.2 **Stability** We provide an ablation study with results in https://postimg.cc/Mf0N6b76, varying MC samples from 5 to 20 on the RLHF dataset for a more in-depth discussion on stability. Results show that increasing the number of MC samples $M$ generally lowers the standard deviation across runs with different seeds, indicating better stability. The benefit of reduced standard deviation diminishes as $M$ increases. This validates that our chosen $M$ provides a good trade-off, offering sufficient stability without excessive computation. 2. We ensure there is no data leakage in our experimental setup, since the test data has no overlap with the validation data. Additionally, we clarify that similar to LESS and TSDS, our work focuses on targeted data selection (lines 102-5 R), which explicitly assumes access to a validation set that is distributionally aligned with the target task. Our “task-agnostic” & “task-specific” settings in the experiments pertain to the **data preparation** stage (to form the initial pool of training data without & with the knowledge of the target task, resp.) **prior** to data selection (lines 245-8 R, 255-7 R). We believe that selecting training data without the knowledge of the downstream evaluation data falls under a different setting—task-agnostic data selection—where alternative approaches based on perplexity or coreset are applicable. However, these directions are outside the scope of this work, focusing on targeted data selection. 3. 
NICE_AMC generally performs well under two conditions: (a) when a stronger model is available for the target task—one that can generate higher-quality Monte Carlo samples, and (b) when the training pool is sufficiently large, allowing better gradient alignment with NICE_AMC (lines 313–5, R). This explains its success in our task-agnostic (data preparation) setting with abundant data. As for guidance on when to use NICE vs. NICE_AMC: - NICE_AMC: a stronger assistance model is available + a large, diverse training pool can be leveraged. - NICE: resources or training data are limited or when the base model is already powerful enough. In the revision, we will include additional baselines and analyses, and clarify the use case of NICE_AMC. We hope the discussion above will address your concerns and improve your impression of our work. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' feedback and I don't have other concerns.
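The stability finding in the rebuttal above (more MC samples, lower run-to-run standard deviation, with diminishing returns) is the usual $1/\sqrt{M}$ behaviour of a Monte Carlo estimate of an expected reward. A self-contained sketch under a toy reward distribution (the $N(1,1)$ reward is our illustrative assumption, not the paper's):

```python
import numpy as np

def mc_reward_estimate(rng, M):
    """Monte Carlo estimate of E[r] from M sampled responses.
    Toy assumption: rewards of sampled responses are drawn from N(1, 1)."""
    return rng.normal(loc=1.0, scale=1.0, size=M).mean()

rng = np.random.default_rng(0)
# Repeat the estimator many times to measure its run-to-run variability
# at several MC sample sizes M.
spread = {M: np.std([mc_reward_estimate(rng, M) for _ in range(500)])
          for M in (5, 20, 80)}
# The standard error shrinks roughly as 1/sqrt(M), so quadrupling M
# only halves the spread: diminishing returns, as observed in the ablation.
```

This is why a moderate $M$ (20 in most of the rebuttal's tasks) can already give acceptable stability while keeping the sampling cost bounded.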
Summary: The paper introduces NICE (Non-differentiable evaluation metric-based InfluenCe Estimation) for selecting training data to improve the performance of large language models (LLMs) on specific tasks. The method leverages policy gradient techniques to optimize non-differentiable evaluation metrics directly, addressing the limitations of existing loss-based influence estimation methods. Claims And Evidence: Supported: 1. Effectiveness of NICE: The paper claims that NICE outperforms existing data selection baselines across diverse scenarios. 2. The authors argue that optimizing non-differentiable evaluation metrics directly leads to better data selection than traditional loss-based methods. The experimental results, particularly in tasks requiring long-form generation, support this claim by demonstrating improved performance metrics. 3. The paper claims that NICE can perform data selection without labeled validation data when the reward function does not require labels. Problematic: 1. While the paper acknowledges the computational cost of NICE, it claims that the method is efficient due to the use of LoRA and random projection. However, there is a lack of detailed analysis or evidence comparing the computational cost of NICE with other methods LESS or BM25. Methods And Evaluation Criteria: Overall, the proposed methods and evaluation criteria are well-aligned with the problem of optimizing data selection for instruction tuning in LLMs. They address the key challenges and provide a comprehensive framework for evaluating the effectiveness of the proposed approach. Theoretical Claims: The theoretical claims in the paper are based on sound principles from reinforcement learning and influence estimation. I found that the connection between policy gradients and influence estimation is intuitive but lacks formal grounding. The paper does not clarify why policy gradients are theoretically suitable for influence estimation. Experimental Designs Or Analyses: 1. 
The computational cost of Monte Carlo sampling (e.g., 500 samples for HumanEval) and reliance on GPT-4 for AMC are mentioned but not quantified. I also wonder about the cost of using the GPT-4 API. Can your method also work well with other open-source or smaller LLMs? You may also discuss trade-offs between sample size and performance. Supplementary Material: Yes, appendix. Relation To Broader Scientific Literature: 1. NICE advances the field by directly optimizing for task-specific performance metrics, which is a novel approach compared to traditional methods that rely on proxy metrics like next-token prediction loss. 2. By applying policy gradient techniques to data selection, the paper introduces a new way to estimate the influence of training data on model performance, bridging the gap between reinforcement learning and data selection in NLP. Essential References Not Discussed: None. Other Strengths And Weaknesses: Please see the comments in previous sections. Other Comments Or Suggestions: 1. While the paper notes that larger subsets can harm performance (Fig. 3), it does not systematically characterize "harmful" data (e.g., via qualitative examples). Questions For Authors: Please see the comments in previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the effectiveness of NICE and the soundness of our methods and evaluation. **Claims And Evidence:** We defer the computational analysis between LESS and NICE to our rebuttal for Reviewer waTE. For BM25, it is indeed an efficient retrieval method based on lexical matching, ranking training data by relevance to the validation data. However, **BM25 is model-agnostic** (i.e., it selects the same data for different models). Thus, a subset selected may perform well with one model but not necessarily be optimal with another (lines 286-94, R). While being **model-agnostic contributes to BM25's efficiency, it also limits its performance**. In contrast, NICE enables model-aware data selection, optimizing training data specifically for the target model’s validation performance. **Theoretical Claims:** The theoretical suitability of policy gradient for influence estimation can be justified as follows. Loss-based influence estimation methods (e.g., TracIn, influence function), which our method builds upon, estimate the influence of a training point on validation loss. In particular, these methods measure the influence via the “gradient of the validation loss” (the change in validation loss w.r.t. the model parameters / policy), which, by the **chain rule**, can be combined with the first-order gradient or the Hessian of the training loss (w.r.t. the model parameters). In contrast, we estimate the influence on **validation performance**, measured by evaluation metrics. However, since the evaluation metrics are non-differentiable, the policy gradient, which measures the change in validation performance caused by the corresponding training data point, is a direct replacement of the “gradient of the validation loss”. It also allows estimating the influence of the training data via the chain rule. 
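Schematically, the substitution described above can be written as follows (TracIn-style notation with learning rate $\eta$, training loss $\ell$, and reward $r$; the symbols are ours, not lifted from the paper):

```latex
\underbrace{\mathcal{I}(z, z_{\mathrm{val}}) \;\approx\; \eta \,
  \nabla_\theta \ell(z;\theta)^{\top} \, \nabla_\theta \ell(z_{\mathrm{val}};\theta)}_{\text{loss-based (TracIn-style)}}
\;\;\longrightarrow\;\;
\mathcal{I}(z, z_{\mathrm{val}}) \;\approx\; \eta \,
  \nabla_\theta \ell(z;\theta)^{\top} \,
  \nabla_\theta \, \mathbb{E}_{y \sim \pi_\theta(\cdot \mid x_{\mathrm{val}})}\!\left[ r(x_{\mathrm{val}}, y) \right],
```

where the right-hand policy gradient is in turn approximated with Monte Carlo samples via the log-derivative trick, since $r$ itself is non-differentiable.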
Moreover, we elaborate the derivations for NICE (based on TracIn) and NICE_IF (based on influence functions) in lines 141–52 R and lines 201-19 L, respectively. Both of them also rely on the derivations shown in App A.5.1 and App A.5.2, respectively. Therefore, our NICE is theoretically grounded since it uses the principled influence estimation framework from prior work and extends it to calculate the influence of data on the non-differentiable metrics using policy gradient. **Experimental Designs Or Analyses:** **Cost of GPT-4.** Note that NICE AMC is an optional enhancement—NICE itself does not require GPT-4. We list the projected GPT-4 cost for NICE AMC in the table below. The costs are low for the majority of the tasks, except for RLHF due to its large validation set (which can be addressed as below).

||AlpacaEval|TLDR|RLHF|Humaneval|Avg|
|-|-|-|-|-|-|
|GPT Cost($)|1.70|14.26|291.17|6.34|78.37|

**Use of open-source/smaller LLMs.** As suggested, to reduce cost, we can use high-performing open-source models. On the RLHF dataset, we use Qwen 2.5-7B/3B-Instruct for AMC. Both outperform NICE. Notably, even Qwen 2.5-3B performs better due to its better alignment training, despite its smaller size. These models offer comparable performance to GPT-4 without the API cost.

||NICE|NICE AMC (GPT4)|NICE AMC (Qwen2.5 7B)|NICE AMC (Qwen2.5 3B)|
|-|-|-|-|-|
|RLHF|2.82±0.10|3.03±0.02|3.00±0.03|2.97±0.03|

**500 Monte Carlo samples on HumanEval.** We use 500 samples because HumanEval is challenging—correct generations are rare, and we aim to estimate pass@100, whose evaluation requires 100 generated code pieces. In contrast, other tasks are evaluated using one sample, and we use 20 samples (i.e., a multiplier of 1) to approximate expected performance under the current policy. **MC sample size vs. performance trade-off.** We perform experiments on HumanEval with MC sample sizes ranging from 200 to 500 (see Fig. 
in [link](https://postimg.cc/ZCdzgpBM)), showing a clear performance improvement with more samples. We also analyze this trade-off in Sec. 4.5 (lines 374-84, R). Both HumanEval and RLHF results show performance can improve with sample size. On RLHF, 20 samples already yield strong performance, with additional gains as the size increases. **Other Comments Or Suggestions:** We provide qualitative examples in App. A.10 (pages 23–24). These examples illustrate that “harmful” data can include paraphrased versions of the questions or fail to provide useful information. We will include additional experiments in the revised paper and hope our justifications will address your concerns and improve your opinion of our work. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I found that most of my concerns have been addressed.
WGFormer: An SE(3)-Transformer Driven by Wasserstein Gradient Flows for Molecular Ground-State Conformation Prediction
Accept (poster)
Summary: This paper proposes a transformer-based neural network architecture to predict the ground state conformation of molecules from some low-quality conformation (equivalent to optimization). The encoder part of the model takes as input per-atom embeddings (depending only on atom types) and atom-pair embeddings (distances after Gaussian kernel expansion with some learned weight), and applies the attention mechanism for multiple layers to generate the final representation for each atom. Finally, the decoder predicts deviations in pair-wise distances to update the coordinates of the input conformation. Claims And Evidence: - The first claim of this work is efficiency. "*Classic energy-based simulation is time-consuming when solving this problem while existing learning-based methods have advantages in computational efficiency but sacrifice accuracy and interpretability. In this work, we propose a novel and effective method to bridge the energy-based simulation and the learning-based strategy.*" This paper identifies WGFormer as a method that is faster than energy-based methods and more accurate than other learning-based methods. However, only accuracy is demonstrated; no discussion of efficiency, and especially no comparison to quantum-based methods, has been provided. - Another claim of this work is interpretability. This paper describes the forward pass of the encoder as a process of minimization of some energy function, and then suddenly concludes that the latent energy function of this process "*is an interpretable energy function for conformation optimization*". This makes no sense, as the paper does not reveal at all how exactly the gradient flow process relates to the physical process of conformation optimization. It seems to me that the conclusion of interpretability is drawn only because both the WG flow and the conformation optimization are some kind of energy minimization processes. 
- Further, in the Related Work section, the paper states that "*the empirical architectures of the models prevent them from minimizing a physically meaningful energy function, resulting in limited model interpretability and sub-optimal performance*", which is not true. For example, DSMBind [1], an application study of a conformation optimization model, shows that the model's output correlates with physical quantities (binding affinity), which gives interpretability. [1] Jin, Wengong, et al. "Unsupervised protein-ligand binding energy prediction via neural euler's rotation equation." Advances in Neural Information Processing Systems 36 (2023): 33514-33528. Methods And Evaluation Criteria: There is no major issue with the design of the network. The motivation for drawing inspiration from Sinkformer and for the three improvement techniques is unclear, though they are supported by ablation studies. Theoretical Claims: The main theoretical claim is the relationship between the WG flow and the conformational energy minimization process, which is not grounded. Experimental Designs Or Analyses: There is no major issue with experimental designs. Supplementary Material: No remarks. Relation To Broader Scientific Literature: The work focuses on small molecule conformation optimization. There have been a lot of papers in this area. This paper is not very special and does not make a unique contribution. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Below, we answer your comments one by one. **W1: No discussion about efficiency** **A1:** Firstly, the inefficiency of energy-based/quantum-based methods (e.g., molecular dynamics simulation and density functional theory calculation) has long been a well-established consensus (also discussed in lines 12-20 of our paper) in this field [1][2][3], which is why numerous learning-based methods have been proposed to approximate their accuracy. Secondly, as shown in Figure 4 of our paper, we have demonstrated that our WGFormer is significantly faster than existing learning-based methods, reducing the runtime per molecule by over 50%. **W2&W5: Lack of interpretability and the relationship with the conformational energy minimization process** **A2&A5:** As we have mentioned in lines 30-35, our WGFormer is a Wasserstein gradient flow-driven SE(3)-Transformer architecture. It optimizes molecular conformations by **minimizing an energy function (i.e., Eq.17) defined on the latent mixture models of atoms.** The corresponding proofs, including the relationship between our energy function and conformation optimization (lines 220-242), have been provided in Section 4. To further verify our claim, we have conducted analytic experiments in our response to Reviewer dW7c (https://openreview.net/forum?id=2wUQttiab3&noteId=bSAlOuwM9f). These experiments demonstrate that: 1) Our WGFormer is indeed minimizing the energy function defined on the latent mixture models of atoms; 2) Minimizing this energy function indeed helps optimize the physically-meaningful energy of molecular conformation, and these two kinds of energy are highly correlated; 3) Minimizing this energy function is indeed closely relevant to improving the final metrics. 
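For readers, the generic textbook form of a Wasserstein gradient flow of an energy functional $\mathcal{E}$ over probability measures, together with its explicit Euler (pushforward) discretization, is given below; this is the standard form from the optimal-transport literature (e.g., Santambrogio's overview), not the paper's specific Eq.13 or Eq.17:

```latex
\partial_t \rho_t \;=\; \nabla \cdot \Big( \rho_t \, \nabla \frac{\delta \mathcal{E}}{\delta \rho}[\rho_t] \Big),
\qquad
\rho_{t+\Delta t} \;\approx\; \Big( \mathrm{id} - \Delta t \, \nabla \frac{\delta \mathcal{E}}{\delta \rho}[\rho_t] \Big)_{\#}\, \rho_t .
```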
**W3: Clarification of relevant statements** **A3:** The interpretability we emphasize here refers to the fact that the feedforward computation of existing model architectures (e.g., those of the baselines) cannot effectively correspond to the process of molecular conformation optimization. In contrast, WGFormer optimizes molecular conformations by minimizing an energy function defined on the latent mixture models of atoms, whose feedforward computation is the Euler step solving a continuity equation. The DSMBind work you mentioned predicts protein-ligand binding energy, which is unrelated to our work in either problem setting or technical route. **W4: Unclear motivation** **A4:** The motivation of our model design is clear --- as shown in lines 66-85, we aim to build a new SE(3)-equivariant model with an interpretable architecture for obtaining molecular ground-state conformations from their low-quality conformations. To achieve this aim, we formulate the task as a conformation energy optimization problem, so that the feedforward computation of the model needs to be the Wasserstein gradient flow of the continuity equation in Eq.13. All our architectural improvements, including modifying the QKV mechanism and applying Sinkhorn-based attention (as shown in lines 206-219), serve this aim. **W6: Similar existing work and limited contribution** **A6:** We respectfully disagree with this viewpoint. In fact, all other reviewers have widely recognized the contribution of our work, including interesting and important relations with energy-based methods (Reviewer dW7c), the first attempt to predict molecular ground-state conformation through the lens of Wasserstein gradient flow (Reviewer NScR), and the fact that performance improvements are sound and lean in a promising direction in terms of architectural exploration (Reviewer FN8S). **In the aspect of model architecture,** we propose the first Wasserstein gradient flow-driven SE(3)-Transformer for molecular modeling. 
It is the first SE(3)-equivariant model that corresponds to a Wasserstein gradient flow in the latent space of atoms, which not only enhances the model interpretability but also improves computational efficiency. **In the aspect of theory,** we build the connection between the model architecture and the Wasserstein gradient flow and provide the conditions for the connection. In addition, as shown in Eq.17, we analyze the latent energy function in depth and explain its rationality from the perspective of entropic OT. **In the aspect of application,** currently, few attempts have been made to predict molecular ground-state conformations. Our work provides a strong and interpretable solution to this challenging and important problem. In summary, from any perspective, we believe our work deserves a higher score. We respectfully ask you to re-evaluate our work in light of the responses above and the comments of the other reviewers. [1] Learning gradient fields for molecular conformation generation, ICML 2021. [2] Learning neural generative dynamics for molecular conformation generation. arXiv preprint 2021. [3] Energy-annotated molecular conformations for property prediction and molecular generation. Scientific Data, 2022.
Summary: This paper proposes a transformer architecture for optimizing molecular geometries. The network first processes node features and edge features (encoding the input geometry) via residual update blocks with a bespoke attention mechanism called WGFormer. An interpretation of the WGFormer is provided as a Wasserstein gradient flow in latent space. These features are then used to update the input 3D geometry using relative position vectors. The model is shown to outperform existing methods on this task in terms of MAE and RMSD metrics. Claims And Evidence: The main claim of improved model performance is supported. The results on QM9 are to my knowledge SOTA, but it would be nice to see results on larger and more diverse datasets, e.g., DRUGS. The claim that the WGFormer module, and its interpretation as Wasserstein gradient flow, is responsible for this improved performance is significantly less convincing. * The attention based module with explicitly updated dense pair representations bears some similarity to protein structure prediction networks; it is perhaps not so surprising that such an architecture would by itself already outperform existing methods for predicting small molecule structures. Indeed, the paper's ablation results show that even without the Sinkhorn module, the model is already SOTA. * Although the Sinkhorn module helps, the interpretation as Wasserstein gradient flow is unconvincing. First of all, the interpretation only holds in the limit of infinitesimal updates, when residual blocks are reduced to a neural ODE. Second of all, the energy functional depends on $k$ and is therefore changing for every layer. * The analysis in Figure 5 merely shows that adding layers to the network helps performance, which is independent of the Wasserstein gradient flow interpretation. It would be more convincing if the authors could show that the behavior of non-WGFormers are different. 
* I should stress that an interpretation of the architecture improvement is not necessary to be a strong paper, but if the authors choose to feature it prominently, then it should be strongly supported. Methods And Evaluation Criteria: Yes, the benchmarks are standard. However, larger molecules (DRUGS) have been used in previous works, but are missing here. Also, it would be helpful to show chemical property metrics that are of interest to those computing properties from minimized conformers. Theoretical Claims: I did not carefully check the proofs for theoretical claims. Experimental Designs Or Analyses: Please see comments in "Claims and Evidence." Supplementary Material: I did not carefully review the supplementary material, which contains mostly proofs. Relation To Broader Scientific Literature: This paper contributes to the ML literature on learning small molecule geometry optimizers. Although a coherent body of work at ML venues, the broader impact in actual scientific software is not yet clear. In this respect, this submission probably does not move the needle much, as the improvements are on the same metrics and are incremental (i.e., it is not clear where the threshold for wider applicability is, so it is not clear if it has been reached). The paper seems to draw heavy inspiration from Sinkformer, with which I am less familiar. From the point of view of other areas of AI4Science though, the proposed architecture, with its dense pair features used to bias attention, resembles architectures commonly used for proteins. It could be quite interesting to explore further architectural developments along these lines, even if they do not admit clean theoretical interpretations. Essential References Not Discussed: I am not aware of any essential references not discussed. 
Other Strengths And Weaknesses: No additional comments Other Comments Or Suggestions: Despite the concerns about the interpretation of WGFormer, I lean towards accept because the performance improvements are sound and lean in the right direction in terms of architectural exploration. Questions For Authors: No additional questions Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your positive feedback and constructive comments. Below, we resolve your concerns one by one. **W1: Test on larger datasets.** **A1:** As shown in Table 1 of our paper, in addition to QM9, our WGFormer also achieves SOTA performance on the Molecule3D dataset, which contains about four million molecules and is larger than DRUGS. In addition, each molecule in DRUGS has multiple conformations, and we don't know which one is the ground-state conformation (and don't even know whether the ground-state conformation is among them). So, this dataset is not very suitable for our task. Nevertheless, we follow your suggestion, adding an experiment on DRUGS. We select the conformation with the highest Boltzmann weight as its ground-state conformation, then train and test different models. The results below demonstrate WGFormer's superiority in accuracy and efficiency on DRUGS. |Method|D-MAE|D-RMSE|C-RMSD|Inference Time (s/mol)| |-|-|-|-|-| |GTMGC|0.914|1.420|1.657|1.88| |ConfOpt|0.825|**1.272**|1.650|48.14| |WGFormer|**0.816**|1.281|**1.602**|**0.78**| **W2: The novelty and significance of the proposed model architecture** **A2:** Although attention-based models have been widely used in molecule and protein generation, our WGFormer is still valuable for the following reasons: Firstly, as shown in Figure 2 and lines 206-214, WGFormer introduces a new QKV architecture, leading to a new cross-head interaction mechanism with fewer model parameters. Secondly, as shown in Appendix A, WGFormer applies the Sinkhorn-based attention module while maintaining the SE(3)-equivariance property, which enhances the interpretability of the model and makes it suitable for 3D molecular modeling. Combining these two improvements jointly, WGFormer can be interpreted as a Wasserstein gradient flow for the latent mixture model of atoms. 
No matter whether the original SE(3)-Transformer works well or not, our architectural improvements have led to better performance with fewer trainable parameters and better interpretability, demonstrating the rationality of our design. **W3: 1) Can finite Sinkhorn iterations be interpreted as Wasserstein gradient flow? 2) The energy functional depends on $k$ and changes for every layer.** **A3:** **Firstly, using finite Sinkhorn iterations is not strong evidence that the model cannot be interpreted as Wasserstein gradient flow.** The gap between theoretical analysis and practical implementation is natural --- for convex optimization, we stop an algorithm in finite iterations, but it does not mean its analysis based on infinite series is meaningless. In practice, the Sinkhorn algorithm converges very fast to the optimum. The Sinkformer in [2] merely applies 3-5 Sinkhorn iterations. Our WGFormer follows the same setting. **Secondly, $k$ changes do not mean the energy functional changes for every layer.** In our paper, the energy functional is shown in Eq.(17), which is formulated as an entropic OT problem. Each WGFormer layer optimizes the same energy functional, leading to the Wasserstein gradient flow (lines 220-251, left column). **W4&W5: Interpretations of architecture improvement and Figure 5** **A4&A5:** Our WGFormer can optimize molecular conformations by minimizing an energy function defined on the latent mixture models of atoms. To further verify our claim, we have conducted additional experiments in our response to Reviewer dW7c (https://openreview.net/forum?id=2wUQttiab3&noteId=bSAlOuwM9f). These experiments demonstrate that: 1) WGFormer indeed minimizes a latent energy function for atoms' probability measure; 2) Minimizing this latent energy helps optimize the physical energy of conformation, highly correlated with the final metrics. 
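For readers unfamiliar with the Sinkhorn-scaling step discussed in A3, here is a minimal NumPy sketch of generic Sinkhorn normalization (our simplified stand-in, not the paper's SE(3)-equivariant attention module):

```python
import numpy as np

def sinkhorn_attention(scores, n_iters=5):
    """Alternate row/column normalization of exp(scores), pushing the
    attention matrix toward a doubly stochastic one (cf. Sinkformer)."""
    P = np.exp(scores - scores.max())  # subtract max for numerical stability
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)  # rows sum to 1
        P = P / P.sum(axis=0, keepdims=True)  # columns sum to 1
    return P
```

With only 3-5 iterations, as reported for Sinkformer, the matrix is approximately doubly stochastic; the iteration converges geometrically, so a few steps already get close to the fixed point.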
For Figure 5, as shown in lines 426-439, we first train a WGFormer with $L=30$ layers and test it using different layer numbers ($L=1,...,30$). **The performance is improved as the number of layers increases, indicating that each layer is an Euler step to minimize the energy function in Eq.17.** When applying a non-WGFormer architecture with $L$ layers, the performance may not be improved consistently when increasing the number of layers, as shown in this anonymous link (https://anonymous.4open.science/r/WGFormer-comparison/Non-WGFormer.pdf). **W6: Chemical property metrics** **A6:** Following [1], we compare predicted and real ground-state conformations on their energy (in kcal/mol), dipole moment (in debye), and HOMO-LUMO gap (in kcal/mol). The MAEs of WGFormer and typical baselines are shown below, further demonstrating WGFormer's superiority. |Method|Energy|Dipole Moment|HOMO-LUMO Gap| |-|-|-|-| |GTMGC|0.008| 0.014|0.068| |ConfOpt|0.006|0.012|0.069| |WGFormer|**0.004**|**0.009**|**0.053**| Hope the above responses help enhance your confidence to further support our work. [1] Torsional diffusion for molecular conformer generation, NeurIPS 2022. [2] Sinkformer: Transformers with doubly stochastic attention, AISTATS 2022. --- Rebuttal Comment 1.1: Comment: "Secondly, $k$ changes do not mean the energy functional changes for every layer." What is $A$ in Eq 17? If $A=WW^T$ as previously defined then this changes for every layer with weights $W$. "The performance is improved as the number of layers increases, indicating that each layer is an Euler step to minimize the energy function in Eq.17." Could the authors please clarify exactly how this experiment is carried out? There are many possible interpretations, some of which are consistent with the authors' claims, and some less so. "Firstly, using finite Sinkhorn iterations is not strong evidence that the model cannot be interpreted as Wasserstein gradient flow. 
The gap between theoretical analysis and practical implementation is natural --- for convex optimization, we stop an algorithm in finite iterations, but it does not mean its analysis based on infinite series is meaningless" I believe the authors here are conflating the difference between truncation and discretization. My criticism holds analogously to the interpretation of residual updates as neural ODEs --- the discretization introduces qualitatively different behavior. For example, neural ODEs are always invertible whereas residual networks are nearly never so. --- Reply to Comment 1.1.1: Comment: Thanks for your feedback. Below, we try to resolve your remaining concerns one by one. **Q1: The energy function changes with respect to the weight $W$ in each layer.** **A1.** Sorry for misunderstanding your question. In our original rebuttal, we meant that, given $A$, the function $k$ and the corresponding energy function are defined accordingly, no matter what kind of algorithm is applied to optimize the energy. We indeed learn different $W$'s for different layers in our experiment. However, this setting follows Sinkformer, which tries to interpret the Transformer (not the SE(3)-Transformer) from the perspective of Wasserstein gradient flow. Moreover, even if the energy function changes for different layers, each WGFormer layer can still be interpreted as a one-step update of the energy associated with the current layer. Nevertheless, we add the following experiment to resolve your concern. **Besides training WGFormer with 30 different layers, we train another WGFormer, which contains only 6 different layers and repeats each layer 5 times. In such a case, repeating each layer can be interpreted as optimizing the same energy functional with 5 Euler steps.** Due to the limited rebuttal time, we train the model on QM9 and compare it with the baselines and the 30-layer WGFormer. The results below show that the WGFormer using repeated layers leads to comparable performance. 
||D-MAE|D-RMSE|C-RMSD| |-|-|-|-| |GTMGC|0.264|0.470|0.367| |ConfOpt|0.244|0.438|0.246| |WGFormer (30 Layers)|0.227|0.422|0.206| |WGFormer (6 Repeated Layers)|0.231|0.422|0.219| In the revised paper, we will add this result and try WGFormer with fewer repeated layers. **Q2: Further explain the experiment obtaining Figure 5** **A2:** As we have mentioned in lines 426-439 of the paper and the response to Reviewer dW7c (https://openreview.net/forum?id=2wUQttiab3&noteId=bSAlOuwM9f), we first train a WGFormer with 30 layers through the defined loss function (i.e., Eq. 21), then we fix the model and use it to infer ground-state conformations in the testing set. During the inference, we obtain the interatomic relational representation $\mathbf{R}^{(l)}$ of each layer ($l$=1,...,30), and pass each of them through the trained decoder to obtain the molecular conformation (i.e., Eq.3) corresponding to each layer. For these molecular conformations obtained through different layers, we can further use the evaluation metrics (i.e., D-MAE, D-RMSE and C-RMSD) to measure how close these conformations are to the ground-state conformation. As shown in Figure 5, the metrics are improved as the number of layers increases. Besides, when we conduct this experiment on a non-WGFormer architecture with 30 layers, the metrics are often not improved consistently when increasing the number of layers, as shown in https://anonymous.4open.science/r/WGFormer-comparison/Non-WGFormer.pdf. Moreover, as shown in the response to Reviewer dW7c (https://openreview.net/forum?id=2wUQttiab3&noteId=bSAlOuwM9f), the reduction of the physical energy is highly correlated with the reduction of the latent energy in Eq.17 --- the Pearson correlation between them is larger than 0.9. In summary, this result demonstrates that WGFormer can be interpreted as the Euler step minimizing the latent energy defined in Eq.17, and accordingly, leads to the ground-state conformation optimization. 
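For concreteness, here is one plausible NumPy reading of the conformation metrics discussed above: D-MAE over pairwise interatomic distances, and C-RMSD after optimal rigid alignment via the Kabsch algorithm. This is our hypothetical sketch; the papers' exact metric definitions may differ.

```python
import numpy as np

def d_mae(X, Y):
    """MAE over all pairwise interatomic distances of two conformations
    X, Y of shape (n_atoms, 3)."""
    Dx = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    Dy = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    iu = np.triu_indices(len(X), k=1)  # unique atom pairs
    return float(np.mean(np.abs(Dx[iu] - Dy[iu])))

def c_rmsd(X, Y):
    """RMSD after optimal rigid alignment (Kabsch algorithm)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)
    d = np.sign(np.linalg.det(U @ Vt))  # avoid reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return float(np.sqrt(np.mean(np.sum((Xc @ R - Yc) ** 2, axis=-1))))
```

Both quantities are invariant to rigid motions of either conformation, which is why they are suitable for comparing predicted and ground-state geometries.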
**Q3: The rationality of interpreting the discretized feedforward steps as Wasserstein gradient flow.** **A3:** Thanks for further clarifying your concern. As shown in Section 4.1 and the above responses, we interpret each WGFormer layer as an Euler step (i.e., Eq.14) for solving the continuity equation in Eq.13. The equation describes the evolution of the latent mixture model of atoms in the time interval [0, 1], and the number of WGFormer layers corresponds to the number of Euler steps. We agree that the discretization may have different behaviors compared to its continuous counterpart. However, it should be noted that: 1) In practice, the Euler method, although discrete, is one of the most commonly used methods for solving differential equations. 2) In theory, Sections 4.3 and 4.4 in the reference [1] (cited in the paper) show that the time-discretized probability measure evolution (in the JKO scheme shown in Eq.(4.10)) converges to the Wasserstein gradient flow as the time step becomes infinitesimal (the content from Eq.(4.17) to Eq.(4.18)). In other words, the rationality of the discretized approximation is guaranteed in theory. Therefore, from the perspectives of theory and practice, interpreting WGFormer as an implementation of Wasserstein gradient flow is, at the very least, reasonable. **We hope the above responses can resolve your remaining concerns and enhance your confidence to further support our work in the decision phase.** [1] Santambrogio, Filippo. {Euclidean, metric, and Wasserstein} gradient flows: an overview. Bulletin of Mathematical Sciences 2017.
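To illustrate the discretization point in general terms, here is a generic explicit Euler integrator (our illustration, not the paper's Eq.14): the discretized trajectory differs from the continuous solution, but the error shrinks as the step size decreases.

```python
import math

def euler_solve(f, x0, t0, t1, n_steps):
    """Explicit Euler integration of dx/dt = f(t, x) on [t0, t1]."""
    x, t = x0, t0
    h = (t1 - t0) / n_steps
    for _ in range(n_steps):
        x = x + h * f(t, x)
        t += h
    return x

# dx/dt = -x, x(0) = 1 has exact solution exp(-t); compare step counts.
exact = math.exp(-1.0)
coarse = euler_solve(lambda t, x: -x, 1.0, 0.0, 1.0, 10)
fine = euler_solve(lambda t, x: -x, 1.0, 0.0, 1.0, 10_000)
```

With 10 steps the result is `0.9**10`, noticeably below `exp(-1)`; with 10,000 steps the gap is negligible, matching the convergence guarantee the authors cite for the time-discretized JKO scheme.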
Summary: The paper introduces WGFormer, a novel model that combines the strengths of energy-based simulation and learning-based methods for predicting molecular ground-state conformations. WGFormer is built upon an SE(3)-Transformer framework and is driven by Wasserstein gradient flows. In an auto-encoding setup, the model encodes low-quality 3D conformations (e.g., generated via RDKit) and decodes them into ground-state conformations using a lightweight MLP. A key innovation is the customized attention mechanism based on the Sinkhorn-scaling algorithm with adjusted QKV matrices and the omission of the feed-forward network, which together reduce computational cost and promote cross-head feature fusion. The theoretical framework establishes that, under certain conditions, WGFormer operates as a discretized version of Wasserstein gradient flows that minimize a physically meaningful energy function defined on a latent mixture model of atoms. Extensive experiments on datasets such as Molecule3D and QM9 show that WGFormer not only outperforms current state-of-the-art baselines in both accuracy (e.g., lower C-RMSD values) and efficiency but also demonstrates robustness through comprehensive ablation studies. Claims And Evidence: Yes Methods And Evaluation Criteria: - A comparison between WGFormer and other generative models (e.g., GeoDiff) is needed. - There are many works that optimize low-quality molecular conformations to generate higher-quality ones, such as AlphaFold3. It utilizes RDKit-generated / CCD structures as reference features to predict the ligand structures. What are the differences between WGFormer and these methods? - It seems that the Wasserstein gradient needs the computation of gradients. What is the computational consumption of WGFormer? Adding a comparison with other generative models is preferred. - What is the advantage of Wasserstein gradient flow compared with flow-matching based molecule generative models? 
Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: This method makes the first attempt to predict molecular ground-state conformation through the lens of Wasserstein gradient flow. Essential References Not Discussed: [1] Abramson, Josh, et al. "Accurate structure prediction of biomolecular interactions with AlphaFold 3." Nature 630.8016 (2024): 493-500. Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your comments. Below, we resolve your concerns one by one. **Q1: Comparisons with other generative models** **A1:** Existing generative models generate multiple conformations by sampling. To make such models (e.g., GeoDiff [2] and TorsionDiff [3]) applicable for generating molecular ground-state conformations, we follow the commonly-used protocol in [1], generating multiple conformations and only preserving the one with the lowest energy. Using the above strategy, we train and test GeoDiff and TorsionDiff on QM9. The table below shows that WGFormer outperforms these two models, achieving lower prediction errors and much higher inference speed. |Method|D-MAE|D-RMSE|C-RMSD|Inference Time (s/mol)| |-|-|-|-|-| |GeoDiff|0.278|0.563|0.473|18.06| |TorsionDiff|0.378|0.685|0.437|14.36| |WGFormer|**0.227**|**0.422**|**0.206**|**0.15**| **Q2: The differences with other methods** **A2:** The differences between our WGFormer and the methods you mentioned can be summarized as follows: **1) Interpretable architecture:** As shown in lines 36-49, existing conformation optimization methods (e.g., ConfOpt-Two/Three Atoms) merely apply neural networks to approximate the gradients of atoms' motions when optimizing a molecule's conformation. The feedforward computations of their models do not correspond to minimizing a meaningful energy function. In contrast, our WGFormer is a Wasserstein gradient flow-driven SE(3)-Transformer. Its feedforward computation exactly corresponds to the Wasserstein gradient flow of the latent mixture model of atoms (i.e., the evolution of atoms' probability measure in the latent space). Each WGFormer layer is an Euler step minimizing an energy function of the molecular conformation, thereby significantly improving performance and interpretability. **2) Focus on ground-state conformation:** High-quality conformation $\neq$ Ground-state conformation. 
The AlphaFold series predicts protein structures but cannot ensure the structures are in the ground state. Although AlphaFold3 can predict biological complexes, it does not ensure ground-state conformations, either. WGFormer predicts the ground-state conformation, which determines basic molecular properties (shown in lines 43-48). Therefore, the AlphaFold series is not directly related to our work. **Q3: Does the Wasserstein gradient flow need gradient computation? A comparison of computational consumption is required.** **A3:** Firstly, we would like to explain the key concepts clearly. As shown in lines 238-258, **Wasserstein gradient flow captures a probability measure's evolution in the Wasserstein space (e.g., the change of the latent mixture model of atoms over time) rather than the gradient of each atom.** The detailed differences between Wasserstein gradient flow and gradient flow can be found in [4], which has been cited in the paper. Secondly, **Wasserstein gradient flow is achieved by the model architecture itself rather than by additional computation.** As shown in Section 4.1 and **A2**, the feedforward computation of WGFormer corresponds to the Euler step solving a continuity equation (i.e., Eq.10) and minimizes an energy function (i.e., Proposition 4.1). Thirdly, WGFormer improves SE(3)-Transformer's architecture without increasing the complexity. We can train it on a single 3090 GPU, and we have demonstrated its superiority in inference speed in Figure 4 of our paper and the Table in **A1**. **Q4: Comparison with flow-matching based methods?** **A4:** Our work, i.e., developing a Wasserstein gradient flow-based model, is different from the flow-matching learning strategy: **Theory:** Wasserstein gradient flow optimizes an energy function of probability measures in the Wasserstein space, while flow-matching aims to model the velocity field of particles in the sample space, which does not model the energy of particles or their distribution evolution explicitly. 
**Tech route:** Our work focuses on improving the model architecture and making it interpretable as Wasserstein gradient flow. We do not change the model's learning paradigm. Flow-matching is a model-agnostic learning strategy for fitting the velocity field of data. **Implementation:** WGFormer is learned to predict the ground-state conformation, and its architecture ensures the feedforward computation leads to lower energy. Flow-matching methods focus on generating "valid" rather than energy-minimized conformations. In general, they cannot ensure the energy is minimized in the inference phase. Hope the above responses can resolve your concerns and help you re-evaluate our work. [1] REBIND: Enhancing Ground-state Molecular Conformation Prediction via Force-Based Graph Rewiring, ICLR 2025. [2] Geodiff: A geometric diffusion model for molecular conformation generation, ICLR 2022. [3] Torsional diffusion for molecular conformer generation, NeurIPS 2022. [4] {Euclidean, metric, and Wasserstein} gradient flows: an overview. Bulletin of Mathematical Sciences, 2017. --- Rebuttal Comment 1.1: Comment: Thanks for the reply. I've read the authors' rebuttal, which has resolved my problems. I've raised my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thanks for your valuable feedback! We are glad to hear that we have resolved your problems, and your generous decision to raise your score means a great deal to us. Your support will also help us continue our efforts in this field, and we would greatly appreciate it if you could continue to support our work in the following discussions. Thank you once more for your valuable contributions to enhancing our work. Best wishes, Authors
Summary: This work proposes a "Wasserstein gradient flow-driven" transformer to gradually refine an initial 3D conformation toward its ground state. This refinement process is associated with minimizing an energy function, which enhances interpretability and probably explains the performance improvement. The method is validated with extensive experiments and several ablation studies.

Claims And Evidence: Yes for most claims. I am still wondering about one core claim, i.e., "minimization of the energy function". Several comments:
* Could the authors please visualize this process, e.g., plot the energy function value vs. the layers, from input to output, for illustration?
* Is there any way to verify that this "energy function defined on the latent mixture models of atoms" is closely relevant to the final metrics?

Methods And Evaluation Criteria: Looks good to me
Theoretical Claims: I don't have enough expertise
Experimental Designs Or Analyses: Looks good to me
Supplementary Material: N/A
Relation To Broader Scientific Literature: The relation with energy-based methods seems interesting and important to me.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: see above
Other Comments Or Suggestions: see above
Questions For Authors: see above
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: Thanks for your appreciation of our work. Below, we resolve your concerns one by one.

**Q1: Visualize the latent energy function values achieved through different numbers of layers.**

**A1:** As we have mentioned in lines 30-35 of our paper, our WGFormer is a Wasserstein gradient flow-driven SE(3)-Transformer architecture. In particular, it can optimize molecular conformations by minimizing an energy function defined on the latent mixture models of atoms (i.e., Eq.17 in the paper, denoted as `latent energy` for short), and the corresponding proofs have been provided in Section 4 of our paper. To further verify our claim, **given a trained WGFormer with 30 layers**, we have conducted a series of analytic experiments in the inference phase. Firstly, following your insightful suggestion, we randomly sample some RDKit-based molecular conformations and pass them through the trained WGFormer. Given the output (i.e., $\mathbf{X}$ and $\mathbf{R}$) of each layer, we can calculate the latent energy per layer by solving Eq.17 (using the Sinkhorn-scaling algorithm). Then, we plot the curve of the latent energy varying with the number of layers. As demonstrated by the figure in this anonymous link (https://anonymous.4open.science/r/WGFormer-energy/Latent_Energy.pdf), the latent energy decreases gradually as the number of layers increases, which strongly validates that our WGFormer is indeed minimizing the energy function defined on the latent mixture models of atoms.
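The Sinkhorn-scaling step mentioned above (solving an entropic optimal-transport problem per layer) can be sketched generically. Since Eq.17 itself is not reproduced in this thread, the sketch below assumes uniform marginals and a squared-Euclidean cost between two point clouds — an illustration of the algorithm, not the authors' implementation:

```python
import numpy as np

def sinkhorn_energy(X, Y, eps=1.0, n_iter=200):
    """Entropic-OT cost between two point clouds via Sinkhorn scaling.
    Illustrative stand-in for a per-layer latent-energy evaluation."""
    n, m = len(X), len(Y)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform marginals
    C = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)  # squared-Euclidean cost
    K = np.exp(-C / eps)                 # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):              # alternating marginal scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]      # transport plan with the prescribed marginals
    return float(np.sum(P * C)), P

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
cost, P = sinkhorn_energy(X, X + 0.01 * rng.normal(size=(8, 3)))
```

Evaluating such a cost on the per-layer outputs and plotting it against depth would reproduce the kind of curve the authors describe.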
**Q2: Verify that the latent energy is closely related to the physical energy and final metrics.**

**A2:** Furthermore, we employ the widely used xTB tool [1] to calculate the physical energy values of the molecular conformations obtained through different layers (i.e., given the $\mathbf{X}$ and $\mathbf{R}$ obtained by each layer, we can pass them through the trained decoder and obtain the corresponding molecular conformation) and analyze its correlation with the latent energy values obtained by Eq.17. In particular, taking the physical and latent energy obtained in the first layer as the references, we record the relative changes of the two kinds of energy w.r.t. the number of layers and their correlations in the table below:

| The number of layers | 5 | 10 | 15 | 20 | 25 | 30 |
|---|---|---|---|---|---|---|
| Relative Energy Value Change (kcal/mol) | -9.135 | -18.199 | -19.955 | -34.814 | -45.204 | -52.378 |
| Relative Latent Energy Value Change | -3.629 | -7.729 | -8.512 | -8.932 | -9.195 | -10.385 |

Pearson Correlation Coefficient (Energy vs. Latent Energy): 0.885 ± 0.033
Distance Correlation (Energy vs. Latent Energy): 0.906 ± 0.018

Here, a strong linear correlation is indicated by the Pearson correlation coefficient (0.885 ± 0.033), while the slightly higher distance correlation (0.906 ± 0.018) suggests additional nonlinear dependencies.
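As an aside, both correlation measures can be computed from the tabulated aggregate values with a few lines of numpy. Note that the reported 0.885 ± 0.033 and 0.906 ± 0.018 are averaged over molecules, so the aggregate values computed here will differ slightly:

```python
import numpy as np

energy = np.array([-9.135, -18.199, -19.955, -34.814, -45.204, -52.378])
latent = np.array([-3.629, -7.729, -8.512, -8.932, -9.195, -10.385])

# Pearson correlation: linear dependence between the two trajectories.
pearson = np.corrcoef(energy, latent)[0, 1]

def distance_correlation(x, y):
    """Szekely's distance correlation; also captures nonlinear dependence."""
    def centered(a):
        d = np.abs(a[:, None] - a[None, :])
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
    A, B = centered(x), centered(y)
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

dcor = distance_correlation(energy, latent)
```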
**These results further validate the interpretability of our WGFormer --- minimizing the latent energy defined on the atoms' mixture model helps optimize the physically-meaningful energy of molecular conformation.** Finally, **as shown in Figure 5 of our paper, all evaluation metrics (D-MAE, D-RMSE, and C-RMSD) are improved steadily as the number of layers increases, having validated that minimizing the latent energy value can effectively improve the final metrics.** In general, the above results effectively verify the rationality and interpretability of our WGFormer, further supporting our theoretical claims proposed in Section 4 from the experimental point of view. Thank you once again for your valuable and insightful review, which has made our work more complete and convincing. We will add the above analytic experiments to the revised paper. We hope our responses resolve your concerns and make you more confident in supporting our work. Reference: [1] Bannwarth C, Ehlert S, Grimme S. GFN2-xTB—An accurate and broadly parametrized self-consistent tight-binding quantum chemical method with multipole electrostatics and density-dependent dispersion contributions. Journal of chemical theory and computation, 2019, 15(3): 1652-1671.
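The layer-as-Euler-step picture these experiments validate can be mimicked with a toy (Euclidean, not Wasserstein) gradient flow, where each "layer" performs one explicit Euler step and the energy decreases monotonically with depth — purely illustrative, with a quadratic stand-in for the latent energy:

```python
import numpy as np

def energy(x):
    # Toy quadratic energy standing in for the latent energy of Eq.17.
    return 0.5 * np.sum(x ** 2)

def euler_layers(x0, step=0.1, n_layers=30):
    """Each 'layer' is one explicit Euler step x <- x - step * grad E(x)."""
    xs, x = [x0], x0
    for _ in range(n_layers):
        x = x - step * x  # gradient of 0.5*||x||^2 is x
        xs.append(x)
    return [energy(x) for x in xs]

energies = euler_layers(np.random.default_rng(0).normal(size=5))
```

Plotting `energies` against the layer index gives the same qualitative decay as the authors' latent-energy curve.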
L-Diffusion: Laplace Diffusion for Efficient Pathology Image Segmentation
Accept (poster)
Summary: The paper introduces L-Diffusion, a novel approach to pathology image segmentation by leveraging Laplace distributions and contrastive learning, achieving good performance, and demonstrating generalization capabilities. Claims And Evidence: Supported Claims: 1) $\textbf{Laplace Distributions Improve Segmentation.}$ Laplace distributions are more effective than Gaussian distributions for modeling pathology image components, especially for tail categories. The authors provide a detailed theoretical analysis comparing the gradients of Laplace and Gaussian distributions, showing that Laplace distributions have sharper gradients and better separation between categories. Empirical results, including visual comparisons of pixel value distributions (Figure 2) and quantitative improvements in segmentation metrics (Tables 1 and 2), further support this claim. 2) $\textbf{Contrastive Learning Enhances Component Separation.}$ The proposed pixel latent vector contrastive learning mechanism improves the separation of different tissue and cellular components. The ablation studies (Table 4) demonstrate that combining contrastive learning with Laplace distributions significantly improves segmentation performance. The visualization of pixel latent vector separation (Figure 7) also provides qualitative evidence of the effectiveness of contrastive learning. 3) $\textbf{Superior Performance on Pathology Image Segmentation.}$ L-Diffusion achieves state-of-the-art performance on multiple pathology image segmentation benchmarks. The paper presents extensive quantitative results (Tables 1 and 2) showing that L-Diffusion outperforms existing methods on datasets such as CRCD, PUMA, BCSS, and PanNuke. The improvements in metrics like DICE, MPA, mIoU, and FwIoU are substantial and well-documented. 4) $\textbf{Generalization to Remote Sensing Images.}$ L-Diffusion generalizes well to other large-scale image segmentation tasks, such as remote sensing. 
The authors provide quantitative results (Table 5) and qualitative visualizations (Figure 11) showing that L-Diffusion performs competitively on the Massachusetts-Building dataset, a remote sensing image segmentation task. Claims That Could Benefit from Additional Evidence: 1) $\textbf{Efficiency of L-Diffusion.}$ L-Diffusion is efficient and reduces the dependency on annotated data. While the paper mentions that L-Diffusion can achieve competitive performance with limited annotated data (Table 3), it does not provide a detailed analysis of the computational cost or training time compared to other methods. Diffusion models are generally computationally expensive, and it would be valuable to understand how L-Diffusion compares in terms of resource requirements. Additionally, a sensitivity analysis on the number of diffusion steps (T) and its impact on performance and computational cost would strengthen this claim. 2) $\textbf{Applicability to Other Medical Imaging Modalities.}$ L-Diffusion is broadly applicable to medical image segmentation tasks. The paper focuses on pathology and remote sensing images but does not explore the model's performance on other medical imaging modalities, such as MRI or CT. Extending the experiments to these domains would provide stronger evidence for the model's generalizability. 3) $\textbf{Robustness to Noisy or Imperfect Annotations.}$ L-Diffusion is robust and can handle long-tail distributions effectively. While the paper demonstrates strong performance on tail components, it does not explicitly test the model's robustness to noisy or imperfect annotations, which are common in medical imaging. Including experiments with noisy labels or partial annotations would further validate the model's robustness. 4) $\textbf{Ethical and Clinical Impact.}$ L-Diffusion provides a powerful tool for advancing tumor diagnosis and microenvironment analysis. 
The paper briefly mentions ethical approval for the datasets but does not discuss the broader ethical implications or clinical impact of using L-Diffusion in real-world medical settings. A more detailed discussion of potential risks, limitations, and guidelines for deployment would strengthen this claim. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria in the paper are well-suited for the problem of pathology image segmentation and the broader application of medical image analysis. Theoretical Claims: The theoretical claims and proofs in the paper are generally correct and well-supported by mathematical derivations. The authors effectively demonstrate why Laplace distributions are more suitable than Gaussian distributions for pathology image segmentation, and they provide a solid theoretical foundation for the proposed L-Diffusion model. Experimental Designs Or Analyses: The authors evaluate L-Diffusion on several well-established pathology image datasets, including CRCD (colorectal cancer), PUMA (melanoma), BCSS (breast cancer), and PanNuke (multi-class cellular segmentation). These datasets are representative of the challenges in pathology image segmentation, such as gigapixel resolution, multi-scale features, and long-tail distributions. The paper uses standard evaluation metrics for segmentation tasks, including DICE, MPA (Mean Pixel Accuracy), mIoU (Mean Intersection over Union), and FwIoU (Frequency Weighted IoU). These metrics are widely accepted in the medical imaging community and provide a balanced assessment of segmentation performance. The authors compare L-Diffusion with a variety of state-of-the-art methods, including U-Net++, Swin-UNet, DeepLabv3, and SAMPath. The paper does not provide a detailed analysis of the computational cost or training time compared to other methods. 
Diffusion models are generally computationally expensive, and it would be valuable to understand how L-Diffusion compares in terms of resource requirements. The paper does not discuss the sensitivity of L-Diffusion to key hyperparameters, such as the scale parameter (b) in the Laplace distribution. Supplementary Material: I reviewed all the supplementary materials Relation To Broader Scientific Literature: Medical image segmentation, particularly in pathology, has been extensively studied using deep learning models such as U-Net (Ronneberger et al., 2015), DeepLab (Chen et al., 2017), and more recently, Transformers (Atabansi et al., 2023). These models have shown success in segmenting tissues and cells in pathology images, but they often struggle with long-tail distributions and multi-scale features. L-Diffusion addresses these challenges by introducing Laplace distributions and contrastive learning to enhance the separation of different components, particularly tail categories. This builds on prior work by providing a novel approach to handling the inherent complexities of pathology images, such as gigapixel resolution and imbalanced tissue distributions. Diffusion models, such as Denoising Diffusion Probabilistic Models (DDPM) (Ho et al., 2020), have gained popularity for their ability to model complex data distributions. These models have been applied to various tasks, including image generation, denoising, and segmentation. However, most prior work in diffusion models uses Gaussian distributions for noise modeling. L-Diffusion introduces Laplace distributions as an alternative to Gaussian distributions in diffusion models. The authors argue that Laplace distributions provide sharper gradients and better separation between different categories, making them more suitable for pathology image segmentation. This is a novel contribution that extends the applicability of diffusion models to medical imaging tasks with long-tail distributions. 
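The "sharper gradients" argument can be checked numerically: the score of a Laplace density, $\frac{d}{dx}\log p(x) = -\mathrm{sign}(x-\mu)/b$, keeps constant magnitude $1/b$, whereas the Gaussian score $-(x-\mu)/\sigma^2$ vanishes near the mean. A generic sketch (not the paper's derivation):

```python
import numpy as np

def laplace_score(x, mu=0.0, b=1.0):
    # d/dx log Laplace(x; mu, b) = -sign(x - mu) / b
    return -np.sign(x - mu) / b

def gaussian_score(x, mu=0.0, sigma=1.0):
    # d/dx log N(x; mu, sigma^2) = -(x - mu) / sigma^2
    return -(x - mu) / sigma ** 2

x = 0.05  # a point close to the mean
lap, gau = abs(laplace_score(x)), abs(gaussian_score(x))
```

Near the mean the Laplace score magnitude stays at 1/b while the Gaussian score shrinks toward zero, which is the intuition behind the sharper category separation claimed for L-Diffusion.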
Contrastive learning has emerged as a powerful technique in self-supervised learning, particularly in computer vision (Chen et al., 2020). It has been applied to various tasks, including image classification, object detection, and segmentation. In medical imaging, contrastive learning has been used to improve feature representations and reduce the dependency on annotated data. L-Diffusion incorporates pixel latent vector contrastive learning to enhance the separation of different tissue and cellular components. This builds on prior work by applying contrastive learning to the latent space of diffusion models, which is a novel approach. The authors demonstrate that contrastive learning significantly improves segmentation performance, particularly for tail components, by amplifying the distributional differences between different categories.

Essential References Not Discussed: I think Laplace noise was previously found to be a better schedule for image generation. (See https://arxiv.org/abs/2407.03297 and https://arxiv.org/abs/2304.05907)

Other Strengths And Weaknesses: see above
Other Comments Or Suggestions: not found
Questions For Authors: While the paper focuses on Laplace distributions, have the authors explored other distributions, such as Cauchy or Student's t, for modeling pathology image components? If so, how do these distributions compare to Laplace distributions in terms of segmentation performance? How does L-Diffusion compare to unsupervised or semi-supervised methods for pathology image segmentation, particularly in scenarios with limited annotated data? Have the authors explored unsupervised or semi-supervised variants of L-Diffusion?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
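The pixel latent vector contrastive learning discussed in this review follows the familiar InfoNCE template (pull matched latents together, push mismatched ones apart). A minimal numpy sketch under that assumption — an illustration of the general mechanism, not the authors' actual loss:

```python
import numpy as np

def info_nce(z1, z2, tau=0.05):
    """InfoNCE loss: row i of z1 should match row i of z2 against all others."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                     # cosine similarities / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives sit on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce(z, z + 0.01 * rng.normal(size=(8, 16)))  # matched positives
shuffled = info_nce(z, rng.normal(size=(8, 16)))            # unrelated "positives"
```

The loss is small when positive pairs are well separated from negatives, which mirrors the inter-component separation the paper reports.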
Rebuttal 1:
Rebuttal: ## Reply to Reviewer ezdp

We sincerely appreciate your efforts in reviewing our paper and your constructive feedback. We have organized your comments and provided our responses below, hoping they address your concerns.

**[Question (Q)1] Efficiency of L-Diffusion**

Answer (A)1: Thanks. L-Diffusion consists of two stages: latent feature extraction with the diffusion model and segmentation with ConvNeXT. The former is time-consuming. In the training stage, using two A6000 GPUs and a dataset of 50,000 patch samples, the L-Diffusion model requires about 26 hours—24 hours for training the diffusion model and 2 hours for refining the ConvNeXT segmenter. Meanwhile, the SOTA SAMPath achieves convergence within 5 hours. Notably, L-Diffusion's two-stage design enhances ConvNeXT fine-tuning efficiency. In the inference stage, L-Diffusion and SAMPath take an average of 2.5 and 1.8 hours per image of size 90027 × 88341, respectively. For more results, see **A2 to reviewer ZaXb**. Future work will focus on accelerating L-Diffusion.

**[Q2] Applicability to Other Medical Imaging Modalities**

A2: Thanks. We have added generalization experiments. Results show that L-Diffusion achieves 6.43% and 0.67% improvements over SOTA methods on the BraTS and RIM-ONE datasets, respectively. Please refer to **A1 to reviewer ZaXb** for detailed experimental results. We will supplement the final version with more analysis and visualizations.

**[Q3] Robustness to Noisy or Partial Annotations**

A3: Thanks. Table 3 of the original submission shows the experiment with partial annotations. Pathology annotations usually contain space-noise (wrong boundaries) and label-noise (wrong types). The noise robustness experiments (DICE score) are given as:

|Type|5%|10%|20%|30%|
|-|-|-|-|-|
|space-noise|92.18|91.83|91.79|91.65|
|label-noise|88.72|85.26|80.14|75.48|

Our method is robust to space-noise because only pixels in the inner high-confidence area are adopted for contrastive learning.
Similar to other methods, our method is also sensitive to label-noise, which could be mitigated by an anti-noise loss function. More results and analysis will be included in the final version.

**[Q4] Ethical and Clinical Impact Discussion**

A4: Thanks. We recognize the importance of addressing the ethical and clinical implications of deploying L-Diffusion in real-world medical applications. While we briefly mention ethical approvals for datasets, we will expand our discussion to cover potential risks, such as biases in dataset annotations, the need for regulatory validation before clinical use, and challenges in interpretability.

**[Q5] Sensitivity of Key Hyperparameters**

A5: Sorry for the confusion. The scale parameter (red box of Eqn. 15) is an adaptive parameter predicted by the U-Net. Ablation studies on different distributions and diffusion steps are given in Table 4 and Figure 5. Moreover, an ablation study on the τ of contrastive learning is given as follows:

|τ|0.1|0.08|0.05|0.02|
|-|-|-|-|-|
|DICE|89.33|90.67|92.11|88.74|

We will add it to the final version.

**[Q6] (a) Relation to Prior Work ([1] Imp... [2] Diff...) & (b) Exploration of Alternative Distributions (Dis.)**

A6: Thank you for your comments. **(a)**: *[1]* demonstrates that the Laplace distribution has steeper peaks and heavier tails. This property makes it more advantageous when dealing with data with outliers or sparse noise. *[2]* compared the Gaussian, t, and uniform distributions, among others, which tend to produce smooth samples for image generation tasks. Different from *[1, 2]*, this paper proposes for the first time to achieve segmentation through sharp distributional separation of the latent space and gives a detailed theoretical derivation. We will add the above papers to the related work and highlight our contributions. **(b)**: We added the comparative experiment as follows:

|Dis.|DICE|Runtime|
|-|-|-|
|Cauchy|16.25|7325|
|Student's t|83.17|20440|
|Laplace|85.75|8882|

In summary, the Laplace distribution maintains its superiority.
This is because the Cauchy distribution has no expectation or variance, making gradient optimization less stable. The Student's t distribution has potential in accuracy, but its runtime is too long because of its complex gradient solution. We are committed to adding detailed data and analysis to the final manuscript.

**[Q7] Exploration of Unsupervised or Semi-supervised Variants**

A7: We appreciate the reviewer's interest in the applicability of L-Diffusion to unsupervised and semi-supervised variants. Since L-Diffusion is based on contrastive learning in the diffusion stage, the model supports a semi-supervised variant; we explored semi-supervised effects in Table 3. For the unsupervised variant, only the latent features are extracted by L-Diffusion, and K-Means clustering is then applied to obtain an unsupervised segmentation. The performance on the RIM-ONE dataset is provided below:

|Type|Full.|Semi.|Un.|
|-|-|-|-|
|DICE|96.12|88.91|60.92|

More details and analysis will be given in the final version.

---

Rebuttal Comment 1.1:
Comment: Thanks to the authors; I am raising my score to 3.
Summary: This paper introduces L-Diffusion, an innovative framework designed to advance pathology image segmentation by utilizing Laplace distributions and contrastive learning. The primary contribution of the paper lies in its use of Laplace distributions to model distinct components within pathology images, which enhances distributional divergence and facilitates more precise and robust segmentation. The novel pixel latent vector contrastive learning mechanism further reduces reliance on annotated data, addressing the challenges associated with long-tail components. The approach significantly improves segmentation performance on tissue and cell datasets, showing substantial gains over existing methods. Claims And Evidence: Yes, the claims made in the submission are generally supported by clear and convincing evidence. The authors provide both theoretical analysis and extensive experimental evaluations to substantiate their claims about the effectiveness of the L-Diffusion framework for pathology image segmentation. - Theoretical Analysis: The paper clearly explains the rationale behind using Laplace distributions for component modeling, as opposed to Gaussian distributions. It presents mathematical derivations comparing the gradients of the two distributions, showing that Laplace distributions are more sensitive to noise, which is beneficial for pathology image segmentation tasks. - Experimental Evidence: The paper includes quantitative results demonstrating the improvements in segmentation performance across various tissue and cell datasets (CRCD, PUMA, BCSS, PanNuke) compared to several state-of-the-art models. The reported results are statistically significant, showing substantial performance gains (e.g., improvements in DICE, MPA, mIoU, and FwIoU metrics). The qualitative visualizations further support these findings, showing that L-Diffusion achieves better boundary segmentation and handles tail-class components more effectively. 
- Comparison to Existing Methods: The authors compare L-Diffusion with several mainstream segmentation models (e.g., U-Net++, DeepLab, Swin-UNet) and show that their model consistently outperforms these methods, especially in terms of segmenting components with lower proportions (long-tail distribution). - Ablation Studies: The paper conducts detailed ablation studies to show the importance of each component of L-Diffusion, particularly the integration of the Laplace distribution and contrastive learning, which provides additional evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are well-suited for pathology image segmentation. Using Laplace distributions to model distinct components effectively addresses multi-scale features and long-tail distributions in pathology images, enhancing segmentation precision, especially for rare components. The integration of contrastive learning further refines component differentiation while reducing reliance on annotated data. The use of diffusion steps to refine feature maps and latent vectors contributes to more accurate segmentation. The evaluation metrics (DICE, MPA, mIoU, and FWIoU) are standard and effective, assessing various aspects of segmentation performance, including accuracy, precision, and class imbalance. Qualitative visualizations complement these metrics by showing better boundary segmentation, particularly for tail-class components. The benchmark datasets, such as CRCD, PUMA, BCSS, and PanNuke, cover diverse pathology segmentation tasks, while the inclusion of a remote sensing dataset for generalization tests demonstrates the method's robustness across different domains. Overall, the proposed methods and evaluation criteria are appropriate and robust for addressing the challenges in pathology image segmentation. 
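For concreteness, the standard metrics named in this review can be computed from label masks as follows — a generic sketch of DICE and mIoU, not the paper's evaluation code (MPA and FwIoU follow analogously from the same confusion counts):

```python
import numpy as np

def dice(pred, gt):
    """DICE = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def miou(pred, gt, n_classes):
    """Mean IoU over classes present in the prediction or ground truth."""
    ious = []
    for c in range(n_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union:  # skip classes absent from both masks
            ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))

pred = np.array([[1, 1], [0, 0]])
gt   = np.array([[1, 0], [0, 0]])
```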
Theoretical Claims: Yes, I focused on the rationale behind applying Laplace distributions to pathology image segmentation, particularly in terms of the formulaic principles. This is thoroughly addressed in Section A, "Mathematical Derivations", in the appendix, and the visualization of the differences between the Laplace and Gaussian distributions is commendable. It provides convenience for readers when interpreting the paper. The formal proof and the visualizations in the paper demonstrate that the probability change range $\Delta y$ of the Laplace distribution is larger than that of the Gaussian distribution $\Delta y'$, which aligns well with the segmentation strategy for pathology images. Additionally, the introduction of contrastive learning, which brings similar class samples closer and pushes different class samples apart, is a sound approach both conceptually and methodologically.

Experimental Designs Or Analyses: Yes, the experimental designs and analyses appear sound and provide strong evidence for the claims made in the paper.

1. Benchmark Datasets: The paper uses a variety of benchmark datasets (e.g., CRCD, PUMA, BCSS, and PanNuke) for testing the model. These datasets cover different aspects of pathology image segmentation, including tissue and cell segmentation, and represent diverse challenges such as multi-scale features and long-tail distributions. The choice of datasets is appropriate for testing the model's generalizability across multiple types of pathology image segmentation tasks.

2. Comparison with SOTAs: The paper compares L-Diffusion with a variety of SOTA methods (e.g., U-Net, DeepLabv3, Swin-UNet, FastFCN), which is a solid approach to demonstrate the advantages of their proposed method. The reported quantitative results show significant improvements across tissue and cell segmentation datasets, providing strong evidence of the method's effectiveness.

3.
Qualitative and Quantitative Results: The qualitative results (visualizations) demonstrate that L-Diffusion performs well in segmenting boundaries and handling tail-class components, which aligns with the claims made by the authors. The quantitative results (based on the metrics mentioned) show clear improvements over existing methods, further supporting the paper's claims of better performance.

4. Ablation Studies: The paper conducts ablation studies to isolate the impact of key components in the model, such as the integration of Laplace distributions and contrastive learning. This is an essential and well-designed experiment, as it provides insights into which parts of the model contribute most to its success. The ablation results confirm that the combination of these two techniques is a major factor in the model's improved performance.

Supplementary Material: Yes, I have read the entire supplementary materials section. In summary, the supplementary materials provide the theoretical proofs related to L-Diffusion (which I believe is the most important part). Upon review, I found that the proof section starts with well-established diffusion frameworks such as DDPM and gradually derives a Laplace-based pathology image segmentation method. The reasoning behind the proof seems correct to me. Additionally, I would like to praise the authors for providing segmentation results at various levels of L-Diffusion in the appendix, and for extending the approach to remote sensing images, which further demonstrates the robustness of the method.

Relation To Broader Scientific Literature: The contributions of this paper are built upon a broad foundation of existing research, spanning multiple areas such as pathology image segmentation, diffusion models, and contrastive learning. The practical application of L-Diffusion is positioned within the realm of pathology image segmentation, following the Diffusion + Pathology paradigm.
Interestingly, the authors have innovatively approached the problem by utilizing component distributions across diffusion steps, which not only fine-tunes the decomposition of pathological semantic information but also enhances the richness of the data through the diffusion process. This represents a significant innovation in addressing pathology image segmentation challenges. Furthermore, the introduction of contrastive learning aligns with ideas from multimodal learning and other related fields. Overall, the L-Diffusion model presented in this paper is convincing in its relation to the broader scientific literature, adhering to sound research principles. The novel application of the Laplace diffusion process adds a substantial innovation within the field, making the approach a noteworthy contribution to the domain.

Essential References Not Discussed: In my personal opinion, this paper provides a thorough introduction to the algorithms and ideas involved in the related work section and experimental implementation. The approach is innovative, and there are no instances of withholding key literature that would be crucial to the paper's significance.

Other Strengths And Weaknesses: As mentioned earlier, this paper overall meets the standards for publication at ICML. I believe the authors' L-Diffusion approach also offers valuable insights for feature engineering implementation.

Other Comments Or Suggestions: I personally suggest that the Related Work section be placed after the Introduction to help readers quickly engage with the core implementation ideas. At the same time, the specific meanings of the abbreviations should be given in the dataset section of the appendix, so that readers can better understand the segmentation model's performance on different categories.

Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Reply to Reviewer 9Pf9 We sincerely appreciate your thorough and insightful review of our paper, L-Diffusion: Laplace-Based Diffusion Model for Pathology Image Segmentation. Your positive evaluation of our contributions, including the use of Laplace distributions, contrastive learning, and the overall experimental design, is highly encouraging. We address your comments and suggestions below: **[Question (Q)1] Reordering the Related Work Section** Thank you for your constructive comments. We acknowledge your suggestion to move the Related Work section after the Introduction to enhance readability. In the revised version, we will adjust the structure accordingly. **[Q2] Clarification of Dataset Abbreviations** Thank you for your valuable suggestions. We agree that explicitly defining dataset abbreviations in the appendix would enhance clarity. We will ensure that all dataset names and category labels are fully defined to improve readability for the audience.
Summary: This paper proposes a new diffusion-based method to tackle pathology image segmentation. Pathology image segmentation is a challenging task because of the large, gigapixel resolution, diverse scales, and imbalanced tissue distributions in these images. Traditional segmentation models like U-Net and DeepLab have shown promise but struggle with labor-intensive annotation and feature extraction for tail categories. To address these issues, the paper introduces the Laplace Diffusion Model (L-Diffusion), which leverages Laplace distributions instead of Gaussian ones to model distinct components within the images. L-Diffusion uses contrastive learning to enhance the differentiation between components while maintaining intra-component similarity, improving segmentation precision and robustness, particularly for tail components. The model reduces the reliance on annotated data by capturing the distributional characteristics of different components. Extensive experiments show that L-Diffusion outperforms existing methods in accuracy and robustness across various benchmarks, providing an innovative approach to pathology image segmentation.

Claims And Evidence: Based on the challenges summarized in the submission: "current pathology image segmentation tasks grapple with labor-intensive annotation processes or limited accuracy in identifying tail samples", there are two main claims made by the submission: (1) the Laplace distribution is advantageous for broadening distribution disparities, based on the analysis in the submission; (2) the Laplace diffusion model learns better component distributions for pathology image segmentation, given its long-tail nature. These two claims are well supported by the method's theory and experiments.
Methods And Evaluation Criteria: The proposed method has been evaluated in the context of pathology image segmentation on a series of benchmarking datasets: CRCD, PUMA, BCSS, against a series of state-of-the-art baseline methods. Theoretical Claims: 1. The proofs in Section 3.1 for computing the gradient of the Laplace distribution are correct. 2. The mathematical derivation in Supplementary Material A is verified. Experimental Designs Or Analyses: There are 3 main experiments conducted for evaluating the effectiveness of the proposed method: (1) Quantitative evaluations against baseline methods (2) Ablation study (3) Generalization on large-scale data. The quantitative evaluations are made comprehensively against many state-of-the-art methods and various metrics. The proposed method seems to get a decent gain consistently on each of the benchmarking datasets. Supplementary Material: The entire supplementary material, especially part A for the mathematical derivation of the equations and parts C, D, E for more results and visualizations. Relation To Broader Scientific Literature: The proposed Laplace model can be applied to broader domains such as image segmentation of e-commerce product data, which also shows a long-tail distribution. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: Strengths 1. The reasoning in Section 3.1 is sound in terms of showing the comparison between Laplace distribution gradients and Gaussian distribution gradients. 2. The experiments are comprehensive and the code is available anonymously. The experiments compare a large number of baseline methods, and the proposed method consistently outperforms all previous methods. Other Comments Or Suggestions: 1. The equations in Section 3 of the paper are not numbered, making them difficult to reference. Questions For Authors: The authors are suggested to address the concerns listed in the previous sections during the rebuttal period. Code Of Conduct: Affirmed. Overall Recommendation: 4
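The reviewer's first strength point — that the Laplace score (gradient of the log-density) is steeper than the Gaussian one near the mean — can be checked numerically. Below is a minimal sketch with illustrative parameters (μ = 0, σ = b = 1); it is independent of the paper's actual code:

```python
import numpy as np

def gaussian_score(x, mu=0.0, sigma=1.0):
    # d/dx log N(x; mu, sigma^2) = -(x - mu) / sigma^2: vanishes at the mean
    return -(x - mu) / sigma**2

def laplace_score(x, mu=0.0, b=1.0):
    # d/dx log Laplace(x; mu, b) = -sign(x - mu) / b: constant magnitude 1/b
    return -np.sign(x - mu) / b

x = np.linspace(0.01, 0.1, 10)  # points close to the mean
# The Gaussian score shrinks to zero near the mean, while the Laplace
# score keeps magnitude 1/b, i.e. a steeper push on nearby features.
print(np.abs(gaussian_score(x)).max())  # at most 0.1
print(np.abs(laplace_score(x)).min())   # exactly 1.0
```

The constant-magnitude Laplace score is the mechanism behind the "broadening distribution disparities" claim: features lying close to a component mean still receive a non-vanishing push.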
Rebuttal 1: Rebuttal: ## Reply to Reviewer nPmf We appreciate the reviewers' detailed and insightful feedback on our work. Below, we address the key points raised in the review. **Response to Theoretical Claims** We are grateful for the acknowledgment that our theoretical derivations and mathematical proofs are correct. We appreciate the reviewers' verification of our gradient computations for the Laplace distribution in Section 3.1 and the mathematical derivations in Supplementary Material A. **Response to Experimental Design and Evaluation** We thank the reviewers for their recognition of the comprehensive nature of our experiments, including: - Quantitative evaluation against strong baseline methods. - Ablation studies that highlight the contributions of different components of our method. - Generalization experiments on large-scale datasets, demonstrating the robustness of our approach. The consistent performance improvements across benchmarking datasets (CRCD, PUMA, BCSS) further validate the effectiveness of L-Diffusion. **Response to Weaknesses and Suggestions** Thank you for your valuable suggestion. We acknowledge the reviewer's concern that the equations in Section 3 are not numbered, which makes it difficult for reference. We will add equation numbers in the final version to improve clarity and readability.
Summary: The paper introduces L-Diffusion, a novel Laplace Diffusion Model designed for efficient pathology image segmentation. Unlike traditional approaches relying on Gaussian distributions, L-Diffusion employs multiple Laplace distributions to better differentiate component features in pathology images. The model follows a diffusion process, generating a sequence of feature maps, and enhances pixel-wise vector representations using contrastive learning. This approach significantly improves segmentation performance, particularly for tail-class components, which are often difficult to identify due to the long-tail distribution in pathology images. Extensive experiments on six tissue and cell segmentation datasets show that L-Diffusion achieves substantial improvements over state-of-the-art models, with up to 7.16% higher DICE score for tissue segmentation and 20.09% improvement for cell segmentation. The paper also provides theoretical analysis supporting the advantages of using Laplace distributions, demonstrating their ability to enhance component differentiation and model efficiency. Claims And Evidence: The claims made in the paper are well-supported by both theoretical analysis and experimental results. The authors claim that using Laplace distributions instead of Gaussian distributions enhances feature decomposition and improves segmentation accuracy, particularly for tail-class components. This claim is backed by mathematical derivations and empirical comparisons of Gaussian vs. Laplace distribution differentiation, which show that the latter leads to greater separability of pixel-wise feature vectors. Methods And Evaluation Criteria: The proposed L-Diffusion method and evaluation criteria are well-suited for pathology image segmentation. The use of Laplace distributions enhances feature differentiation, and contrastive learning improves pixel-wise representation, addressing challenges in long-tail class segmentation. 
The model is evaluated on six benchmark datasets covering both tissue and cell segmentation, ensuring a comprehensive assessment. Metrics such as DICE, MPA, mIoU, and FwIoU are appropriate for measuring segmentation accuracy and robustness. Theoretical Claims: The paper provides theoretical justification for using Laplace distributions over Gaussian distributions, arguing that the steeper gradient of the Laplace distribution enhances feature differentiation. The derivations, including probability density functions, gradient calculations, and diffusion step equations, appear mathematically sound. The Laplace noise formulation and reverse diffusion process are derived systematically, and the transition from Gaussian to Laplace-based modeling is well-supported. While I did not rigorously verify every step, the overall framework aligns with established diffusion model theory. No obvious errors were found, but external validation would further confirm the proofs' correctness. Experimental Designs Or Analyses: The experimental design is generally sound and appropriate for pathology image segmentation. The model is tested on six diverse datasets, covering both tissue and cell segmentation, ensuring broad applicability. Metrics such as DICE, MPA, mIoU, and FwIoU are correctly chosen for evaluating segmentation performance. The comparison with state-of-the-art models is comprehensive, showing clear performance improvements. Ablation studies confirm the contributions of Laplace distributions and contrastive learning, and a generalization test on a remote sensing dataset suggests potential broader applicability. A minor limitation is the lack of evaluation on other medical imaging domains, which could further validate the model’s versatility. Supplementary Material: The supplementary material includes mathematical derivations, additional dataset details, and extended visualizations of segmentation results. 
The theoretical derivations justify the use of Laplace distributions in the diffusion process, appearing logically sound. The dataset details provide transparency about sample sizes and categories, supporting the validity of experiments. Additional visualizations illustrate segmentation performance and latent feature distributions, reinforcing the model’s improvements. While I reviewed these key sections, I did not verify every mathematical step in detail. No major issues were found, but external validation of derivations would strengthen confidence in the theoretical claims. Relation To Broader Scientific Literature: The paper builds on diffusion models and contrastive learning, extending them to pathology image segmentation. Traditional segmentation models, such as U-Net, DeepLab, and Transformers, have been widely used, but they struggle with long-tail class segmentation and multi-scale feature extraction. The introduction of Laplace distributions instead of Gaussian distributions aligns with prior work on improving feature separability in latent spaces. Contrastive learning, which has been effective in self-supervised learning, is adapted here to enhance pixel-wise feature differentiation. The paper also connects to recent efforts in medical image segmentation using diffusion models, such as MedSegDiff [1], but uniquely applies component-wise latent distribution modeling. [1] Wu, Junde, et al. MedSegDiff: Medical image segmentation with diffusion probabilistic model. Essential References Not Discussed: The paper provides a comprehensive discussion of related literature, covering key works in diffusion models, contrastive learning, and pathology image segmentation. It cites foundational methods such as U-Net, DeepLab, and Transformer-based models, as well as recent diffusion-based segmentation approaches like MedSegDiff. 
The discussion on Laplace distributions and their role in enhancing feature separability is well-supported by prior probabilistic modeling research. Additionally, the application of contrastive learning to pathology image segmentation is contextualized within the broader field of self-supervised learning. Overall, the references are sufficient, and no essential prior works appear to be missing. Other Strengths And Weaknesses: ### Strengths 1. The paper introduces a novel approach by replacing Gaussian distributions with Laplace distributions, improving feature separability for pathology image segmentation. 2. The use of pixel latent vector contrastive learning enhances segmentation accuracy, especially for tail-class components, addressing long-tail distribution challenges. 3. L-Diffusion outperforms state-of-the-art segmentation models on six benchmark datasets, demonstrating significant improvements in both tissue and cell segmentation. 4. The mathematical derivations provide a solid foundation for Laplace-based diffusion modeling, supporting the claimed improvements in feature differentiation. ### Weaknesses 1. While the model performs well on pathology images, how would it generalize to other medical imaging tasks such as radiology or ophthalmology? 2. The proposed diffusion process involves multiple steps; can the authors provide runtime comparisons with other state-of-the-art segmentation models to assess efficiency? 3. The model introduces multiple hyperparameters (e.g., diffusion steps, contrastive learning settings). How sensitive is the performance to these hyperparameters, and could the authors provide guidelines for optimal tuning? Other Comments Or Suggestions: The second sentence in the third paragraph of the Introduction seems a bit unclear. Questions For Authors: Please see the weaknesses part above. Code Of Conduct: Affirmed. Overall Recommendation: 4
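The DICE metric cited throughout these reviews is simply 2|A∩B| / (|A| + |B|) for a predicted mask A and a reference mask B. A minimal reference implementation for binary masks (the value returned for two empty masks is a common convention, not taken from the paper):

```python
import numpy as np

def dice(pred, target):
    """DICE = 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(a, b))  # 2*2 / (3+3) ~ 0.667
```

MPA, mIoU, and FwIoU are computed analogously from the per-class confusion counts.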
Rebuttal 1: Rebuttal: ## Reply to Reviewer ZaXb We sincerely appreciate the reviewer's valuable feedback and insightful comments on our paper. We are pleased that the reviewers recognize our contributions in introducing L-Diffusion, leveraging Laplace distributions for pathology image segmentation, and improving segmentation performance, particularly for tail-class components. Below, we address the key concerns and provide clarifications on generalization, efficiency, and hyperparameter sensitivity. **[Question (Q)1] How would it generalize to other medical imaging tasks such as radiology or ophthalmology?** Answer (A)1: Thanks to the reviewers for their interest in the wider applicability of L-Diffusion. While our work focuses on pathology image segmentation, the principles of Laplace-based diffusion modeling and pixel latent vector contrastive learning are broadly applicable. Many medical imaging tasks, such as radiology (CT, MRI) and ophthalmology (fundus images, OCT), share challenges like heterogeneous textures, fine-grained structures, and class imbalances, suggesting that L-Diffusion could generalize well to these domains. We have conducted preliminary experiments on a public brain tumor MRI dataset and the RIM-ONE glaucoma dataset. On the BraTS dataset, our DICE score is better than **SOTA (83.13) [1]**. On the RIM-ONE glaucoma dataset, our MPA score is better than **SOTA (95.45) [2]**. Our findings indicate that L-Diffusion retains its advantages in segmenting small, rare tumor components. We are committed to adding statistics and visualizations to the revised manuscript. | DATASET | DICE | MPA | mIoU | FwIoU | |-|-|-|-|-| | BraTS | 89.56 | 84.53 | 86.41 | 87.10 | | RIM-ONE | 96.12 | 95.58 | 94.39 | 95.27 | **[Q2] Can the authors provide runtime comparisons with other state-of-the-art segmentation models to assess efficiency?** A2: Thank you for the constructive suggestion. 
- Training phase: Since our method is based on a Stable Diffusion backbone, the core training runtime is only in the ConvNeXT classification head, which is more advantageous than models that require full training.
- Inference phase: The main use scenario is gigapixel images, which means that for these images, the actual image processing time is much longer than the segmentation time. This means that we can run image processing and segmentation in parallel to reduce the runtime disadvantages of the model. We provide a table comparing the efficiency of different models so that the impact of the runtime disadvantage on efficiency can be seen more intuitively.

| Method | Image Size | Processing Time | Segmentation Time | Total Time |
|:-:|:-:|:-:|:-:|:-:|
| DeepLabV3+ | 90027 × 88341 | 763.58s | 382.85s | 763.63s |
| SAMPath | 90027 × 88341 | 763.58s | 6355.31s | 6355.31s |
| L-Diffusion | 90027 × 88341 | 8880.96s | 4899.84s | 8881.60s |

In summary, L-Diffusion has advantages over traditional models in the training phase and is about 30% slower than the SOTA in the inference phase, which is acceptable at runtime. In future work, we will consider introducing a quantized, accelerated diffusion model and other means of optimization, and the above data and analysis will be added to the revised manuscript.

**[Q3] How sensitive is the performance to these hyperparameters, and could the authors provide guidelines for optimal tuning?**

A3: We greatly appreciate the reviewer's insightful question regarding hyperparameter sensitivity and tuning. Hyperparameters mentioned in the original text (**Models and Parameters** in the Experiment):
- Diffusion steps: Too few steps limit denoising capacity, while too many increase computational cost without substantial improvement. This is reflected in Fig. 5. (5~15)
- Sampling number of contrastive learning: Similar to the diffusion steps.
(100) - Learning rate of Adam optimizer: Diffusion Training (0.00001), Segmentation Training (0.001) - Batch size: Diffusion Training (1), Segmentation Training (32) Unmentioned hyperparameters: - Temperature scaling of contrastive learning: In the case of avoiding gradient explosion, smaller temperature scales can help sharpen the distribution. (0.05-0.1) An ablation study on temperature scaling τ of contrastive learning is given as follows: |τ|0.1|0.08|0.05|0.02| |-|-|-|-|-| |DICE|89.33|90.67|92.11|88.74| We will include practical tuning guidelines in the revised manuscript. **[Q4] The second sentence in the third paragraph of the Introduction seems a bit unclear.** A4: Thank you very much for your correction. We have checked the original text and provided the revised version. Revised text: Prominent deep learning models, including U-Net (R~), DeepLab (L~), and Transformer (A~), have demonstrated superior performance across a spectrum of medical image segmentation tasks. **References** [1] https://arxiv.org/pdf/2403.09262 [2] https://arxiv.org/pdf/1903.02740
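The effect of the temperature scale τ described in A3 — a smaller temperature sharpens the contrastive distribution — can be illustrated with a toy InfoNCE-style loss. This is a generic sketch, not the paper's contrastive-learning implementation:

```python
import numpy as np

def info_nce(anchor, candidates, tau):
    """InfoNCE-style loss; candidates[0] is the positive for the anchor."""
    sims = candidates @ anchor          # cosine similarities for unit vectors
    logits = sims / tau                 # smaller tau -> sharper softmax
    logits -= logits.max()              # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

# Orthonormal candidates make the margin deterministic: sims = [1, 0, ..., 0].
cands = np.eye(8, 16)
anchor = cands[0]
print(info_nce(anchor, cands, tau=0.5))   # log(1 + 7*exp(-2)) ~ 0.67
print(info_nce(anchor, cands, tau=0.05))  # close to zero
```

With orthonormal candidates the loss is exactly log(1 + 7·exp(−1/τ)), so shrinking τ sharpens the softmax and drives the loss toward zero, at the cost of larger logits — the gradient-explosion risk the authors caution against.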
I Think, Therefore I Diffuse: Enabling Multimodal In-Context Reasoning in Diffusion Models
Accept (poster)
Summary: This paper presents ThinkDiff, a framework that can efficiently and effectively align VLMs with diffusion models. Specifically, the framework design is inspired by the finding that the latest diffusion models use either LLMs (e.g., T5) or CLIP as text encoders to guide the output image/video generation, so the paper aligns the embeddings from the VLM with the text embeddings encoded by the diffusion text encoder. Experimental results on the in-context reasoning dataset CoBSAT have shown the effectiveness of the proposed ThinkDiff-VLM and ThinkDiff-CLIP models. Claims And Evidence: Yes, the claim that ThinkDiff is able to do in-context reasoning is well supported. Methods And Evaluation Criteria: Yes, the paper evaluates the in-context reasoning ability on the CoBSAT dataset. Theoretical Claims: NA Experimental Designs Or Analyses: - The experiment for in-context reasoning ability on CoBSAT is sound. However, an important two-step baseline that first uses the VLM to do in-context learning and output the answer in text format, then gives this text as input to the diffusion text encoder, is not included. This baseline is essential to prove that aligning the VLM with the diffusion model is better than first using the VLM to do in-context reasoning and then using diffusion to do generation. I would doubt the necessity of doing such VLM-diffusion alignment as in ThinkDiff if no justification on this point is provided. - It would make the paper stronger if more evaluation benchmarks on image generation were included, such as GenEval and DPG-Bench. Supplementary Material: Yes, I reviewed most parts of the supplementary material. Relation To Broader Scientific Literature: Prior works on diffusion models usually focus on generating high-fidelity images/videos, while not focusing much on reasoning ability. There are some recent concurrent works, such as LanDiff [1], that start to explore the same direction of adding reasoning abilities to diffusion models. [1] Yin, Aoxiong, et al.
"The Best of Both Worlds: Integrating Language Models and Diffusion Models for Video Generation." arXiv preprint arXiv:2503.04606 (2025). Essential References Not Discussed: The references look complete to me. Other Strengths And Weaknesses: - Strength: the idea of aligning the VLM with the text embeddings of the diffusion text encoder is novel - Weakness: As discussed above, a major weakness of this paper is the lack of proof of why we need such alignment instead of just using the VLM to do reasoning and the diffusion model to do generation separately. If the authors cannot provide convincing benefits, the necessity of such alignment could be doubtful. Other Comments Or Suggestions: NA Questions For Authors: NA Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Reviewer j31i** We thank Reviewer j31i for the insightful comments and suggestions. We provide our responses as follows and will add the additional literature to the references.

>**Q1: Why alignment?**

There are several advantages of the alignment over text-based methods on the multimodal generation task. We elaborate on them below: 1. An important aspect of multimodal generation is to capture multimodal information in the generator. The aligner in our method is differentiable. It can convey detailed multimodal information that cannot be expressed effectively by text. However, text-based methods discard this detailed multimodal information in the middle. 2. The model is an end-to-end model. The submodels in it can be optimized together to benefit each other. 3. The features passed through the aligner are more representative and can convey denser spatial context information than sparse text.

> **Q2: Text-based evaluation.**

The CoBSAT benchmark mainly focuses on the correctness of logical reasoning. To better validate the benefits of the alignment, which can convey more multimodal information, we evaluate the image-conditioned generation of ThinkDiff-LVLM on COCO, as shown in the table below. The input of ThinkDiff-LVLM is an image. The input of the text-based method is the text generated by the LVLM. As shown in this table, the alignment performs better on the CLIP image metric (CLIP-I) and the FID score. This experiment further validates that with the proposed differentiable aligner and dense representations, more multimodal information can be passed to the diffusion decoder, which is important for multimodal generation tasks.

| Metric | CLIP-I$\uparrow$ | FID$\downarrow$ |
|-------------|------------|-----------|
| Alignment | **0.744** | **65.8** |
| Text-based | 0.728 | 66.3 |

> **Q3: Evaluation benchmarks.**

We evaluate on more datasets, such as COCO, GenEval, and DPG-Bench.
GenEval results:

| Model | Emu | SEED-LLaMA | Ours |
|-------------|-------|------------|-----------|
| GenEval$\uparrow$ | 3.25 | 35.35 | **39.13** |

DPG-Bench results:

| Model | Emu | SEED-LLaMA | Ours |
|-------------|-------|------------|-----------|
| DPG-Bench$\uparrow$ | 12.4 | 47.3 | **54.8** |

The COCO results are in Q2 and Reviewer BZ6o Q1. Compared to previous methods, ThinkDiff-LVLM achieves notable gains in all benchmarks.

---

Rebuttal Comment 1.1: Comment: Thanks for the authors' response! One last question about the response in Q3. It seems that the scores on GenEval and DPG-Bench are still much lower than those of other recent unified models (JanusPro, Show-o, etc.). As a reference, Fig. 1(b) in JanusPro shows the GenEval and DPG-Bench scores of SDXL, PixArt-Alpha, and SDv1.5. To at least show that the proposed alignment works better than text-based evaluation on the underlying diffusion model you used (which is FLUX), could the authors provide an evaluation result with FLUX on these two benchmark datasets (similar to Fig. 1(b) in JanusPro)? Chen, Xiaokang, et al. "Janus-pro: Unified multimodal understanding and generation with data and model scaling." arXiv preprint arXiv:2501.17811 (2025).

---

Reply to Comment 1.1.1: Comment: ## Reviewer j31i We sincerely thank Reviewer j31i for the insightful feedback and follow-up questions, and provide detailed responses to the new comments.

>**Q4: General Remark**

Our proposed ThinkDiff enables **multimodal reasoning ability** for pretrained image generation models through efficient alignment. Below, we clarify its advantages over FLUX and Janus Pro. **Compared to FLUX:** FLUX focuses on reconstructing text into images, whereas ThinkDiff performs multimodal reasoning generation. **Compared to Janus Pro:** (1) ThinkDiff is a general framework that enhances various pretrained generation models (e.g., text-to-image, text-to-video) with reasoning capabilities.
(2) ThinkDiff achieves superior multimodal reasoning with fewer computational resources. Janus Pro requires 256 A100 GPUs for 14 days while ThinkDiff requires just 5 hours on 4 A100 GPUs. (3) ThinkDiff supports multimodal-to-image generation based on in-context reasoning. As shown in Janus Pro's paper, code, and GitHub issues (#144 in deepseek-ai/Janus), Janus Pro is limited to text-to-image generation and does not support multimodal image generation. > **Q5: Geneval and DPG Bench** While **Geneval and DPG Bench** are valuable for evaluating text-to-image diffusion models, they are not designed for **multimodal reasoning generation**, the core strength of ThinkDiff. They evaluate text-prompt fidelity but lack support for multimodal inputs or reasoning. In contrast, the **CoBSAT** is explicitly designed to assess **multimodal reasoning generation**. It can highlight ThinkDiff's novel contributions. >**Q6: Janus Pro on reasoning benchmark.** To evaluate Janus Pro on CoBSAT, we implemented a two-step workaround since it lacks multimodal-to-image generation capabilities. Janus Pro converted multimodal inputs into intermediate textual descriptions, which were then processed through its text-to-image pipeline. The results are summarized below: **Janus Pro on CoBSAT:** | | Color-I | Background-I | Style-I | Action-I | Texture-I | Color-II | Background-II | Style-II | Action-II | Texture-II | |-|-|-|-|-|-|-|-|-|-|-| | Janus | 0.403 | 0.234 | **0.378** | **0.462** | **0.338** | 0.313 | 0.319 | 0.283 | 0.549 | 0.264 | | Ours | **0.638** | **0.362** | 0.254 | 0.434 | 0.317 | **0.610** | **0.590** | **0.432** | **0.664** | **0.332** | **Key Observations:** (1) ThinkDiff outperforms Janus Pro in most tasks due to its alignment of powerful LVLMs and diffusion decoders, enabling superior multimodal reasoning and generation. (2) Janus Pro struggles to balance reasoning and high-quality image generation and lacks native multimodal-to-image capabilities. 
(3) While Janus Pro excels on text-centric benchmarks (GenEval, DPG-Bench), these benchmarks do not evaluate **multimodal reasoning** or **multimodal-to-image generation**, which are ThinkDiff's key strengths.

>**Q7: Evaluation of Flux and Flux Redux**

We evaluated Flux on GenEval and DPG-Bench, including Janus Pro's results for reference:

**GenEval Results:**

| Emu | SEED-LLaMA | Janus Pro | Flux (Upperbound) | Ours |
|-|-|-|-|-|
| 3.25 | 35.35 | 80.0 | 65.13 | 39.13 |

The results of Flux can be cross-validated against Table 2 in the literature: 1.58-bit FLUX, Chenglin Yang, et al.

**DPG-Bench Results:**

| Emu | SEED-LLaMA | Janus Pro | Flux (Upperbound) | Ours |
|-|-|-|-|-|
| 12.4 | 47.3 | 84.2 | 82.6 | 54.8 |

We also evaluated the open-source Flux Redux, which supports image inputs, on CoBSAT by organizing test cases into a single input image. Results are below:

**Flux Redux on CoBSAT:**

| | Color-I | Background-I | Style-I | Action-I | Texture-I | Color-II | Background-II | Style-II | Action-II | Texture-II |
|-|-|-|-|-|-|-|-|-|-|-|
| Flux Redux | 0.042 | 0.052 | 0.124 | 0.106 | 0.002 | 0.039 | 0.046 | 0.050 | 0.082 | 0.004 |
| Ours | **0.638** | **0.362** | **0.254** | **0.434** | **0.317** | **0.61** | **0.59** | **0.432** | **0.664** | **0.332** |

We can observe from these three tables that: (1) Flux performs well on text-to-image tasks, consistent with its reconstruction-based design. (2) FLUX sets the upper bound for ThinkDiff on GenEval and DPG-Bench, as ThinkDiff builds on FLUX. (3) Unlike ThinkDiff, Flux Redux performs poorly on CoBSAT, confirming its lack of reasoning capabilities. GenEval and DPG-Bench focus on detailed text-prompt reconstruction, which does not fully evaluate ThinkDiff's strengths. (4) ThinkDiff focuses on multimodal reasoning generation. For reconstruction-based tasks, ThinkDiff may slightly alter input text prompts to enhance the semantic richness, which can impact performance on text-centric benchmarks.
However, the core strength of ThinkDiff lies in multimodal reasoning generation. For reconstruction-based text-to-image tasks, the original FLUX can still be used.
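For reference, the CLIP-I metric used in Q2 above reduces to the mean cosine similarity between CLIP image embeddings of generated and reference images. Below is a minimal sketch where the placeholder arrays stand in for real CLIP image-encoder outputs:

```python
import numpy as np

def clip_i(gen_embeds, ref_embeds):
    """Mean cosine similarity between paired image embeddings."""
    g = gen_embeds / np.linalg.norm(gen_embeds, axis=1, keepdims=True)
    r = ref_embeds / np.linalg.norm(ref_embeds, axis=1, keepdims=True)
    return float((g * r).sum(axis=1).mean())

# Placeholder embeddings; in practice these come from a CLIP image encoder.
gen = np.array([[1.0, 0.0], [0.0, 1.0]])
ref = np.array([[1.0, 0.0], [1.0, 1.0]])
print(clip_i(gen, ref))  # mean of cos(0 deg) = 1 and cos(45 deg) ~ 0.707
```

Higher CLIP-I means the generated images are semantically closer to the references, which is why it complements FID (a distributional distance) in the table above.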
Summary: This paper introduces **ThinkDiff**, a novel alignment paradigm that enhances text-to-image diffusion models with **multimodal in-context reasoning** capabilities. Instead of traditional pixel-level reconstruction-based fine-tuning, the authors propose aligning vision-language models (VLMs) with the decoder of a **large language model (LLM)**, leveraging vision-language training as a **proxy task**. The key insight is that **LLM decoders and diffusion model decoders share the same input feature space**, allowing the transfer of reasoning capabilities without requiring complex multimodal reasoning datasets. Claims And Evidence: The claims made in the paper are generally well-supported by empirical evidence. The authors provide: - **Quantitative results** demonstrating large accuracy improvements on the CoBSAT benchmark compared to prior methods. - **Ablation studies** validating key components of ThinkDiff, such as the importance of aligning generated token features instead of input token features. - **Comparisons to existing approaches**, showing that ThinkDiff not only outperforms but also requires significantly fewer resources. However, while the results are strong, further validation on **diverse datasets and real-world applications** would reinforce the claims regarding generalizability. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem: - The **CoBSAT benchmark** is a well-suited dataset for evaluating multimodal reasoning in diffusion models. - The evaluation method follows a structured **2-shot and 4-shot reasoning approach**, ensuring fair comparisons. - **Ablation studies** confirm the contribution of the core components of ThinkDiff. However, the work mainly focuses on **image generation tasks**. It would be valuable to evaluate the method on **other multimodal reasoning tasks**, such as **captioning or VQA**, to assess broader applicability. 
Theoretical Claims: The paper does not focus heavily on theoretical derivations but instead introduces a **novel alignment framework**. The core theoretical claim is that aligning **VLMs with LLM decoders naturally aligns them with diffusion model decoders** due to their shared input feature space. This is a **reasonable assumption**, but additional theoretical validation or **formal analysis of feature space alignment** could further strengthen the claim. Experimental Designs Or Analyses: The experimental design is well-structured, with: - **Comparisons against multiple baselines**, including SEED-LLaMA, Emu, and GILL. - **Ablation studies** examining the effectiveness of masked training, LVLM-generated tokens, and RMSNorm initialization. - **Efficiency comparisons**, showing that ThinkDiff achieves better performance with significantly lower computational costs. However, the dataset used for training is relatively **small (~1.7M images from CC3M, CC12M, and SBU)**. Additional results on **larger and more diverse datasets** could provide stronger evidence of generalizability. Supplementary Material: The supplementary material provides: - **Additional qualitative results**, showcasing high-quality multimodal in-context reasoning. - **More comparisons to baselines**, confirming ThinkDiff’s advantages. - **Detailed ablations**, reinforcing the paper’s claims. The supplementary section is well-organized and enhances the main paper's findings. Relation To Broader Scientific Literature: This work builds on several key areas: - **Text-to-image diffusion models** (e.g., Stable Diffusion, Imagen). - **Multimodal large language models (LLMs)** (e.g., Flamingo, SEED-LLaMA). - **Vision-language models (VLMs)** (e.g., CLIP, BLIP-2). - **Multimodal reasoning benchmarks** (e.g., CoBSAT). ThinkDiff **extends prior work** by introducing a novel alignment paradigm that allows diffusion models to reason over multimodal inputs rather than simply reconstructing images. 
Essential References Not Discussed: The paper thoroughly cites relevant prior work but could benefit from discussing: - **Work on vision-language fine-tuning paradigms** (e.g., InstructBLIP, Kosmos-G). - **More literature on feature space alignment in deep learning** (e.g., feature alignment in contrastive learning). Explicitly comparing ThinkDiff’s **alignment approach** to **existing multimodal fusion techniques** could further strengthen the paper. Other Strengths And Weaknesses: #### **Strengths**: 1. **Novel multimodal alignment framework**, enabling in-context reasoning in diffusion models. 2. **Significant improvement over baselines**, achieving SOTA performance on CoBSAT. 3. **Computational efficiency**, requiring fewer GPUs and training hours than competing methods. 4. **Clear experimental design**, including thorough ablation studies and efficiency comparisons. #### **Weaknesses**: 1. **Limited evaluation on real-world datasets**—performance on **other multimodal tasks** like captioning or VQA is unclear. 2. **Theoretical analysis of feature space alignment is minimal**—a deeper mathematical justification could be beneficial. 3. **Only tested on a single benchmark (CoBSAT)**—generalizability to other multimodal datasets remains an open question. Other Comments Or Suggestions: - The **writing is clear and well-structured**, making it easy to follow the key ideas. - The **figures effectively illustrate the concept of multimodal in-context reasoning**. - Additional **comparison to multimodal alignment techniques** (e.g., how ThinkDiff differs from IP-Adapter) would be useful. Questions For Authors: 1. **Generalization beyond CoBSAT**: - How well do you expect ThinkDiff to generalize to **other multimodal tasks** beyond image generation (e.g., estimation, segment)? 2. **Scalability to other models**: - How does ThinkDiff perform with **other diffusion models** (e.g., SDXL, DeepFloyd IF)? - Does scaling the LVLM improve results, or does performance saturate? 
3. **Feature space alignment analysis**: - Have you conducted any **quantitative analysis** on the feature space alignment between the **VLM, LLM decoder, and diffusion decoder**? - Could a **contrastive learning-based loss** further improve alignment? 4. **Ablation on different VLMs**: - Did you test **other VLMs** beyond Qwen2-VL and CLIP? - Would a **stronger LVLM (e.g., GPT-4V)** further improve results? 5. **Failure cases and limitations**: - Can you provide qualitative examples of **failure cases** where ThinkDiff struggles? - What are the **main sources of error**, and how could future work address them? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Reviewer fnpq** We thank reviewer fnpq for the insightful comments and suggestions. We address major concerns as follows and will add additional literature in the reference. >**Q1: Ablation on different VLMs.** We use Qwen2-VL-7B which supports interleaved image and text inputs as the LVLM. The table below further ablates ThinkDiff-LVLM's performance with different LVLMs. InternVL2.5-8B achieves a worse performance compared to Qwen2-VL-7B, indicating that a stronger LVLM can improve the alignment and accuracy. Moreover, with a more powerful LVLM Qwen2-VL-72B, ThinkDiff achieves a new SoTA on most tasks. | Model | Color-I | Background-I | Style-I | Action-I | Texture-I | Color-II | Background-II | Style-II | Action-II | Texture-II | |----|---|----|---|----|---|----|---|----|---|-----| | Internvl-2.5-8b | 0.326 | 0.108 | 0.104 | 0.261 | 0.111 | 0.278 | 0.308 | 0.163 | 0.495 | 0.137 | | Qwen2-VL-7b | 0.622 | 0.349 | 0.237 | **0.459**| 0.290 | **0.511**| 0.534 | 0.340 | 0.534 | 0.292 | | Qwen2-VL-72b | **0.656**| **0.363** | **0.359**| 0.361 | **0.375** | 0.458 | **0.617** | **0.411**| **0.538** | **0.338** | >**Q2: Data scale.** We double the sample number to 3.4M to include more diverse datasets. We train a new model for the same steps. The table below shows with more data, our model can generally improve the results in most tasks. | Data | Color-I | Background-I | Style-I | Action-I | Texture-I | Color-II | Background-II | Style-II | Action-II | Texture-II | |----|---|---|----|-----|---|---|---|---|----|----| | 1.7M | 0.622 | 0.349 | **0.237** | 0.459 | 0.29 | **0.511**| 0.534 | 0.340 | **0.534** | **0.292** | | 3.4M | **0.632**| **0.374** | 0.233 | **0.484**| **0.323** | 0.469 | **0.573** | **0.354**| 0.523 | 0.281 | >**Q3: Contrastive loss.** In **Q1** and **Q2** of Reviewer 37KH, we use the contrastive loss similar to the ImageBind. It has some disadvantages compared to our method and is inferior in accuracy. 
However, combining the explicit contrastive alignment with our implicit alignment still shows potential for future work. >**Q4: Real-world application.** Our method can handle not only the tasks defined by the CoBSAT benchmark but also more general tasks. To demonstrate this point, we further evaluate our model's general generation capabilities on other benchmarks, i.e., COCO, GenEval, and DPG-bench. Our method shows clearly better results compared to other competitors. Please refer to the COCO table in Reviewer BZ6o **Q1**, and the GenEval and DPG-bench tables in Reviewer j31i **Q3**. >**Q5: Other multimodal fusion techniques.** Flux-pro actually uses a fusion method similar to IP-Adapter, which injects image features via attention. As shown in Figures 6, 11, and 13 of the main paper, our method clearly shows advantages in coherently composing different multimodal instructions. >**Q6. Formal analysis of feature space alignment.** The alignment quality of ThinkDiff is implicitly evaluated by the accuracy and consistency of the reasoning and composing benchmarks. Since our alignment is implicit, directly analyzing the feature space is important but not very straightforward. To investigate the theoretical underpinnings of the alignment, we may rely on the development of vision-language training in VLMs. We humbly leave this for future work. >**Q7: Other multimodal reasoning tasks.** In this paper we mainly target multimodal generation tasks. The LVLM in our model supports captioning and VQA tasks, but the method itself does not target them. However, our alignment is a general alignment method and can possibly be applied to other multimodal tasks. This is a very interesting direction for future research. >**Q8: Other diffusion models.** Once aligned, our ThinkDiff can be applied to other models that use T5 as the encoder in our experiments. 
For example, beyond FLUX, we also applied ThinkDiff to CogVideoX in Appendix Figure 14, where a coherent video is generated by seamlessly integrating images and text. This demonstrates ThinkDiff's flexibility. Extending ThinkDiff to even more diffusion models is straightforward and is left as future work due to the tight schedule. >**Q9: Failure cases.** This anonymous link (https://anonymous.4open.science/r/anonymous-4DF1/failure_case.png) shows two failure cases on CoBSAT. We think the errors stem from two main sources. One is imperfect LVLM reasoning, which produces wrong inferences, such as in the "cow in the desert" case. Using a more powerful VLM can address this problem. The other is imperfect alignment that does not accurately condition the diffusion model, such as in the second "white apple" case. We think a possible way to effectively address this problem is to include high-quality datasets for end-to-end training.
Summary: This paper proposes "ThinkDiff", a novel method to incorporate VLMs in text-to-image generation pipelines with the goal of improving multimodal understanding and in-context reasoning capabilities. The key lies in aligning the VLM outputs with the diffusion decoder input space, which is done by using the corresponding LLM decoder as a proxy signal for alignment. This results in a lightweight training algorithm, and empirical results demonstrate a) significant improvements for in-context generation on CoBSAT and b) qualitative performance on compositional tasks. Claims And Evidence: Most of the claims in the paper are well-supported by their experiments. However, there is no quantitative evaluation of the quality of generated images, nor any analysis of inference compute; both need to be addressed (see Methods And Evaluation Criteria). Methods And Evaluation Criteria: Certain experiments and benchmarks are missing from the current manuscript. 1. Despite strong results on multimodal in-context generation, the paper is lacking in quantitative evaluation outside of the in-context generation settings. For instance, quantitative evaluation of the quality of image generation (e.g., FID/CLIP scores on zero-shot generation on COCO or subject-driven generation on DreamBench) as well as a comparison against contemporaries is missing. However, these are important to ascertain if there are any potential drawbacks arising from the proposed framework and its focus on reasoning. 2. While low compute requirements for training are desirable, inference cost is arguably a bigger decision factor for model adoption. Therefore, the paper needs to include an analysis of the inference costs (especially due to the additional multimodal processing in VLMs) to complement their training cost comparisons. 3. The paper proposes two models - ThinkDiff-LVLM and ThinkDiff-CLIP. 
However, the first is only evaluated on in-context settings while the other only on compositional settings without any specific reasoning for the choice. Evaluation of both methods on both settings is needed to understand their respective strengths and weaknesses. Nevertheless, the framework is simple and effective, and my overall opinion of the paper is generally positive. I believe that addressing these concerns will serve to strengthen the manuscript. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design is sound but not comprehensive enough (see Methods And Evaluation Criteria). Supplementary Material: N/A Relation To Broader Scientific Literature: The method presents a novel approach to incorporate a VLM's multimodal reasoning ability for in-context image generation with the added advantage of a lightweight training pipeline. The resulting model achieves significant gains on the CoBSAT dataset, compared to existing approaches. With some additional empirical validation, it can be a good contribution for the research community. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is well structured and easy to follow, while the motivation and approach seem sound. Other Comments Or Suggestions: N/A Questions For Authors: N/A Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Reviewer BZ6o** We thank Reviewer BZ6o for the insightful comments and suggestions. We provide our responses below. >**Q1: Generation quality.** We evaluate the general image-conditioned generation of ThinkDiff-LVLM on 1k images in COCO. The models are conditioned by an image in the experiment. We show the FID, the CLIP image metric (CLIP-I), and the CLIPScore (CLIP-T) in the table below. Our method can achieve much better performance than existing competitors such as SEED-LLaMA, Emu, and GILL. | Model | CLIP-I$\uparrow$ | CLIP-T$\uparrow$ | FID$\downarrow$ | |------------------|------------|------------|------------| | SEED-LLaMA | 0.695 | 0.546 | 71.7 | | Emu | 0.443 | 0.260 | 554.2 | | GILL | 0.418 | 0.227 | 274.5 | | ThinkDiff-LVLM | **0.744** | **0.590** | **65.8** | >**Q2: Inference time.** ThinkDiff-LVLM replaces the T5 encoder in Flux with an LVLM. The LVLM model is highly optimized in the community framework such as vLLM and adds minimal latency to the inference. Qwen2-VL-7B model takes less than 0.2 seconds for one prompt that consists of both images and texts. Although the T5 encoder (typically costs 0.05 seconds for one prompt) is faster, they both are a marginal overhead compared to the diffusion process of Flux (typically over 2.5 seconds). >**Q3: ThinkDiff-LVLM vs ThinkDiff-CLIP and their evaluation.** ThinkDiff-LVLM can handle different tasks including general image-conditioned generation, similar to ThinkDiff-CLIP. As shown in **Q1**, we evaluate ThinkDiff-LVLM on the image-conditioned generation task. Our method obtains better CLIP-I, CLIP-T, and FID scores compared to SEED-LLaMA, Emu, and GILL. This shows the general applicability of ThinkDiff-LVLM beyond the reasoning. On the contrary, ThinkDiff-CLIP is limited by the CLIP image encoder and the T5 encoder. 
Therefore, in our experiments in the main paper, we already observed that ThinkDiff-CLIP cannot handle very complex logical questions as ThinkDiff-LVLM does, but it has strong capabilities for multimodal composing. The above additional evaluation further shows their respective strengths and weaknesses.
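For context on the metrics reported above: CLIP-I is commonly computed as the mean cosine similarity between CLIP image embeddings of generated and reference images, and CLIP-T analogously compares image and text embeddings. A minimal sketch of that similarity computation, with toy vectors standing in for real CLIP features (the helper names are illustrative, not the authors' code):

```python
import math

# CLIP-I sketch: mean cosine similarity between "generated" and "reference"
# embeddings. The vectors below are toy stand-ins for real CLIP features.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def clip_i(gen_embs, ref_embs):
    sims = [cosine(g, r) for g, r in zip(gen_embs, ref_embs)]
    return sum(sims) / len(sims)

gen = [[0.1, 0.9, 0.2], [0.7, 0.1, 0.5]]   # toy "generated" embeddings
ref = [[0.2, 0.8, 0.1], [0.6, 0.2, 0.6]]   # toy "reference" embeddings
print(clip_i(gen, ref))
```

Identical embedding sets score 1.0, and the metric is bounded in [-1, 1], which makes the reported gaps between methods directly comparable.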
Summary: The paper enables diffusion models to perform in-context reasoning across images and text, rather than just reconstructing pixel information. The paper shows two variants --- LVLMs and CLIP. The images generated are of good quality and obtain state of the art performance on various measures. Claims And Evidence: Yes, the claims are supported by the evidence. Methods And Evaluation Criteria: Yes, the proposed method correctly reflects the application at hand. Theoretical Claims: There are no theoretical claims in the paper. Experimental Designs Or Analyses: The experiments look correct to me. Supplementary Material: Yes. Relation To Broader Scientific Literature: The paper is related to the broader scientific literature. Essential References Not Discussed: NA Other Strengths And Weaknesses: Please see questions for authors. Other Comments Or Suggestions: Please see questions for authors. Questions For Authors: I list down the strengths and the weaknesses of the work here Strength - The paper proposes a method that can effectively handle reasoning in diffusion model. - The experimental gain is significant, and good for the community to build on. Weakness - The major concern appears to be the key aspect of the model. Overall, I feel the method is leveraging the strong powers of the base model---LVLMs or CLIP and use that to learn a multimodal aligner. This idea has been used previously in video and text representation learning, and this work proposes to extend that to multimodal composition. It is not clear why the authors could not use any other way to learn the joint embedding space. For example, ImageBind, Rohit Girdhar et al. performs this step in the image space. What is the key insight that enables this method to do better beyond existing work? - Same as above, ImageBind by Girdhar et al. is not compared against in the experiments. Why can it not be adapted to the authors' use case? 
The overall idea is for the model to be compared against a variety of inputs. Overall, I am leaning towards acceptance, but the distinctions and experiments could be made more robust and convincing. Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Reviewer 37KH** We thank Reviewer 37KH for the insightful comments and suggestions. We provide our responses below. >**Q1. Experiment with Imagebind.** We conduct an experiment with ImageBind-style alignment to align the LVLM decoder and the T5 encoder. The input of LVLM is an image and a text prompt. It generates token features and text. The input of the T5 decoder is the LVLM's generated text. Instead of using only one token similar to the original Imagebind, we extract 32 semantic tokens from both LVLM and T5 to compute the alignment loss. This gives stronger capacity to the alignment. As shown in the table below, our ThinkDiff achieves significant improvements on the CoBSAT benchmark over the ImageBind-style alignment. | Model | Color-I | Background-I | Style-I | Action-I | Texture-I | Color-II | Background-II | Style-II | Action-II | Texture-II | |------------|:-------:|:------------:|:-------:|:--------:|:---------:|:--------:|:-------------:|:--------:|:---------:|:----------:| | **ImageBind** | 0.414 | 0.244 | 0.140 | 0.202 | 0.230 | 0.347 | 0.346 | 0.235 | 0.258 | 0.231 | | **Ours** | **0.622** | **0.349** | **0.237** | **0.459** | **0.290** | **0.511** | **0.534** | **0.340** | **0.534** | **0.292** | >**Q2. Advantages over ImageBind.** While the contrastive alignment in Imagebind shows promising results in multimodal alignment, ThinkDiff introduces more advantages: 1. ThinkDiff exploits the capabilities and knowledge of the LVLM decoder and the LLM (T5) decoder for the alignment, which enables data-efficient and training-efficient alignment, and has been validated in vision-language research areas. On the contrary, Imagebind needs web-scale datasets to train from scratch. 2. ThinkDiff provides a fine-grained and flexible alignment method. Since it does not compute element-wise distances, the tokens of different modalities can have different lengths. This also supports training strategies such as masked training. 
The alignment is not element-wise minimization but high-level semantic alignment. 3. ImageBind typically uses one token to compute the token distances, which is a coarse alignment. ThinkDiff excels at fine-grained alignment over longer token sequences. Even when using more tokens in ImageBind, the lengths of the two types of tokens must be the same. Therefore, as shown in the experiment in **Q1**, our method achieves much better accuracy compared to the ImageBind-style alignment.
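The ImageBind-style contrastive baseline discussed above can be sketched as an InfoNCE loss over paired token features, where matched pairs are pulled together and mismatched ones pushed apart. The toy vectors and temperature below are illustrative assumptions, not the actual LVLM/T5 features:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# InfoNCE-style contrastive loss: for each query i, treat keys[i] as the
# positive pair and all other keys as negatives (numerically stable logsumexp).
def info_nce(queries, keys, temperature=0.1):
    loss = 0.0
    for i, q in enumerate(queries):
        logits = [dot(q, k) / temperature for k in keys]
        m = max(logits)
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss -= logits[i] - log_denom
    return loss / len(queries)

q = [[1.0, 0.0], [0.0, 1.0]]    # e.g. toy LVLM token features
k = [[0.9, 0.1], [0.1, 0.9]]    # e.g. toy matching T5 token features
print(info_nce(q, k))
```

Swapping the keys (so each query faces the wrong positive) raises the loss, which is the behavior such explicit alignment objectives rely on.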
Normalizing Flows are Capable Generative Models
Accept (oral)
Summary: The paper proposes a new architecture called TarFlow, which is a Transformer-based variant of Masked Autoregressive Flows (MAFs). TarFlow achieves a high-performance normalizing flow (NF) model by stacking autoregressive Transformer blocks on image patches and alternating the autoregressive direction between layers. Additionally, the paper introduces three techniques to improve sample quality: Gaussian noise augmentation, a post-training score-based denoising technique, and efficient guidance recipes for both the class-conditional and unconditional models. Claims And Evidence: I think the claims regarding the normalizing flow part are solid, but I am not so persuaded by the motivation, which claims that normalizing flows have seen limited practical adoption. Haven't flow matching techniques already been widely applied? Can the authors address the fundamental differences between the proposed TarFlow and current flow matching methods? Methods And Evaluation Criteria: 1. In the section that introduces the guidance technique, I am wondering how the class label conditioning is added in the transformer architecture: through input concatenation or cross-attention? 2. The tasks and benchmark datasets make sense, but I think the authors should give a brief introduction to less commonly used evaluation criteria like bits per dim (BPD) in Table 2. Theoretical Claims: This paper does not propose theoretical claims that need mathematical derivation or proof. Its contribution mostly lies in the design of a Transformer-based normalizing flow architecture. Experimental Designs Or Analyses: 1. As mentioned before, I think this paper lacks comparison and discussion with flow matching methods, which should be closely related to the realm of normalizing flows. 2. From the quantitative experiments, the performance of TarFlow is not on par with diffusion-based methods; can the authors provide analysis regarding this phenomenon? 3. 
In the quantitative generation experiments, comparisons with other flow-based and autoregressive models are lacking; please add more baseline results. Supplementary Material: The supplementary material provides substantial qualitative results and a comprehensive introduction to related work. Relation To Broader Scientific Literature: NA Essential References Not Discussed: None Other Strengths And Weaknesses: One subtle suggestion is that the top visualization part of Figure 4 is not clear enough. It is hard to distinguish the difference between the raw samples and the denoised samples. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Q: Motivation and difference to flow matching A: We’d like to clarify that the exact notion of Normalizing Flow (NF) we consider here is fundamentally different from the modern Flow Matching (FM) method. In our paper, we follow the conventional notion of NFs, which exclusively refers to a maximum likelihood method that uses the change of variables technique of probability [1, 2, 3]. One of their distinct properties is that the training loss can be **exactly computed** for each training example. Flow matching [4], on the other hand, denotes a training technique that can be considered a variant of diffusion models (see [5] for an illustration), which does not follow a maximum likelihood objective. Also, the training loss of FM relies on a **stochastic approximation** of an expectation over time steps, and over the noise within each step. Moreover, it was shown that FM’s training objective follows a modified variational lower bound of the likelihood [8], which again is fundamentally different from the exact-likelihood nature of NFs. In fact, this difference is also documented in [4]; see the first paragraph of Section 5. The confusion may stem from the reference to continuous normalizing flows (CNFs) in [4], which considers a **generalized notion of NFs** that deterministically map noise to data, regardless of their training objectives. However, please note that the CNF here is mathematically equivalent to the probability flow [7], or the ODE/DDIM-style inference path in diffusion models [6]. This is also the reason why we have categorized all diffusion model and flow matching results under the term Diff/FM in our comparisons; see e.g. Table 2. We hope this clarifies the difference, and we will be happy to make this more explicit in our paper, constraining the scope of our discussion of NFs to methods that directly follow the MLE objective with the change of variables formula. 
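The exact-likelihood property distinguishing NFs above can be illustrated with a one-dimensional affine flow, where the change-of-variables formula log p(x) = log p(z) + log|dz/dx| is available in closed form (a toy sketch, unrelated to TarFlow's actual architecture):

```python
import math

# Toy 1-D affine flow: z = (x - mu) / sigma maps data to a standard normal base.
# The change-of-variables formula gives an EXACT log-likelihood per example,
# in contrast to the stochastic training objectives of flow matching / diffusion.
def affine_flow_logp(x, mu=0.5, sigma=2.0):
    z = (x - mu) / sigma                               # invertible map x -> z
    log_base = -0.5 * (z ** 2 + math.log(2 * math.pi)) # log N(z; 0, 1)
    log_det = -math.log(sigma)                         # log |dz/dx|
    return log_base + log_det

# This matches log N(x; mu, sigma^2) term by term, as the change of
# variables guarantees for an affine map.
print(affine_flow_logp(1.0))
```

In TarFlow the map is a deep stack of autoregressive Transformer blocks rather than a single affine map, but the per-example exactness of the loss is the same property.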
## Q: Label conditioning implementation A: We adopt a simple implementation of label conditioning, where we add the label embedding directly to the position embeddings in each flow block. During guidance, the unconditional predictions are obtained by averaging the label embeddings of all classes and adding the result to the position embeddings. ## Q: Introduction to BPD A: Thank you for the suggestion; we are happy to provide more context on it. In simple terms, BPD measures the average log probability over all the pixels of an image, where the probability is computed as a discrete one among 256 possible pixel values. ## Q: Results worse than diffusion models A: We agree that the current results of TarFlow still lag behind the best-tuned diffusion models. We argue that for any new type of method, it usually takes time for it to be collectively improved by a community before it reaches SoTA performance, and this was definitely the case for diffusion models too. We look forward to working together with the generative modeling community to further improve the upper bound of normalizing flow methods. ## Q: Comparison with other flow and autoregressive models A: Our comparisons have focused on mainstream continuous modeling methods, and we have identified GANs and diffusion/flow matching models as the representative categories. Also note that we did not include traditional NF baselines in the FID comparisons, as they were generally underperforming and we were not able to find comparable FID results reported in the literature. Following the reviewer's suggestion, we will also include representative baselines from autoregressive models. We would also be happy to include more baselines for comparison if the reviewer is able to provide more specific references. 
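As a concrete sanity check for the BPD metric described above: a uniform model over the 256 possible pixel values should score exactly 8 bits per dimension. A small sketch of the standard conversion from a negative log-likelihood in nats:

```python
import math

# Bits per dimension from a total negative log-likelihood in nats:
# bpd = NLL_nats / (D * ln 2), where D is the number of dimensions
# (pixels x channels) of the image.
def bits_per_dim(nll_nats, num_dims):
    return nll_nats / (num_dims * math.log(2))

# Sanity check: a uniform model over 256 pixel values assigns probability
# 1/256 per pixel, i.e. exactly 8 bits per dimension.
D = 3 * 32 * 32                      # e.g. a CIFAR-10 image
nll_uniform = D * math.log(256)      # -log p in nats under the uniform model
print(bits_per_dim(nll_uniform, D))
```

Any model scoring below 8 BPD is therefore compressing the pixels better than the uniform baseline, which is the lossless-compression reading of the metric.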
## Q: Figure 4 not clear A: We apologize for the lack of clarity in the visualization; the main cause is that the noise we apply is indeed very small, and it does require a closer look to recognize the effect of denoising. We refer the reviewer to more visualizations in the supplementary material, i.e., Figures 9-12, which should allow one to observe the effect of the noise more clearly. # References [1] Density estimation by dual ascent of the log-likelihood, Tabak et al, Communications in Mathematical Sciences, 2010 [2] Variational inference with normalizing flows, Rezende & Mohamed, ICML 2015 [3] Nice: Non-linear independent components estimation, Dinh et al, ICLR 2014 [4] Flow Matching for Generative Modeling, Lipman et al, ICLR 2023 [5] Diffusion Meets Flow Matching: Two Sides of the Same Coin, Gao et al, ICLR Blogposts 2025 [6] Denoising Diffusion Implicit Models, Song et al, ICLR 2021 [7] Score-Based Generative Modeling through Stochastic Differential Equations, Song et al, ICLR 2021 [8] Understanding Diffusion Objectives as the ELBO with Simple Data Augmentation, Kingma & Gao, NeurIPS 2023
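The label-conditioning recipe described at the start of this rebuttal (class embedding added to the position embeddings; unconditional branch uses the mean class embedding) can be sketched as below. The toy embeddings, helper names, and the linear combination with weight `w` (the standard classifier-free guidance formula) are illustrative assumptions; the paper's own guidance recipe may differ:

```python
# Sketch of the conditioning scheme: add the class embedding to the position
# embeddings; for unconditional predictions during guidance, add the mean
# embedding over all classes instead. Toy 2-D embeddings for illustration.
label_emb = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
pos_emb = [0.5, -0.5]

def conditioned_input(label=None):
    if label is None:  # unconditional: average over all class embeddings
        n = len(label_emb)
        avg = [sum(e[i] for e in label_emb.values()) / n for i in range(len(pos_emb))]
        return [p + a for p, a in zip(pos_emb, avg)]
    return [p + e for p, e in zip(pos_emb, label_emb[label])]

# Standard classifier-free-style combination of the two predictions,
# with guidance weight w (hypothetical; shown only as an illustration).
def guide(cond, uncond, w=2.0):
    return [u + w * (c - u) for c, u in zip(cond, uncond)]

cond, uncond = conditioned_input(2), conditioned_input()
print(guide(cond, uncond))
```

With w = 1 the guided output reduces to the conditional prediction, and larger w pushes further away from the unconditional one.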
Summary: The paper presents a normalizing flow architecture and training pipeline for image generation that significantly improves on previous normalizing flow models and obtains competitive performance when compared with diffusion models and GANs. The architecture uses a masked transformer backbone to implement RealNVP-type partitioned layers. Training is performed on noised images with a fixed level of noise, which is then removed at inference time using Tweedie’s formula, in what essentially is a single step of denoising diffusion. Importantly, this is done using the existing network without additional training. The paper also introduces an interesting guidance scheme inspired by classifier-free guidance but at the level of the attention layers. Claims And Evidence: The claims are convincing and well supported by the experiments, with several ablation studies elucidating the relative contributions of the novel ideas. For reasons explained below, I do not think that the exceptional likelihood performance is particularly interesting, since the data approximately lives on a lower-dimensional manifold. Methods And Evaluation Criteria: The methods and evaluation metrics are in line with the modern literature. Several ideas from transformers and diffusion models are integrated elegantly in the normalizing flow framework. The authors show a good mastery of the modern generative modeling landscape. Theoretical Claims: The paper does not make precise theoretical claims. The formulas and derivations are solid. Experimental Designs Or Analyses: The experiments follow standard image generation and evaluation approaches widely used in the image generation literature. Supplementary Material: I reviewed all the supplementary material. Relation To Broader Scientific Literature: The paper offers an elegant integration of several modern generative modeling ideas: 1) It combines the general RealNVP framework with a masked transformer architecture. 
2) It uses a guidance scheme inspired by classifier-free guidance in diffusion models. However, here the guidance is used within the different layers of the normalizing flow architecture. 3) It uses a denoising technique that uses the basic formulas of generative diffusion models. Essential References Not Discussed: The literature coverage concerning normalizing flows, diffusion models and autoregressive models is appropriate. Other Strengths And Weaknesses: Strengths: It is very nice to see modern work on normalizing flows, and I think that the authors did a great job in upgrading these models to near the current SOTA using both ideas from transformers and diffusion models. The main strengths of the paper are: The proposed architecture is both elegant and powerful and I think it could offer a starting point for a renaissance in normalizing-flow research even outside computer vision. The guidance scheme, if original, is very interesting and it could be applied to all sorts of transformer architectures. While it is somewhat disappointing that the model needs to work on noised data, I appreciated the elegance of the denoising solution inspired by diffusion theory. I really appreciate how the authors are borrowing elegant ideas from different approaches while integrating them elegantly within the normalizing flow framework. Weaknesses: While the results are convincing, the paper is also a clear indication that normalizing flows, even with very modern components, fail when trained to generate noiseless images. I think that this simply reflects a fundamental problem with likelihood-based models such as NFs when trained on data such as images that are supported on a lower-dimensional manifold-like structure. The issue is that the likelihood diverges if the model approaches the correct support of the data, regardless of how well it is fitting the distribution restricted to the manifold. This phenomenon is known as ‘manifold overfitting’ [CITE]. 
In the case of NFs on noiseless images, the optimum of the loss is situated near or at a singular point of non-invertibility, which leads to unstable training. That said, this is not really a weakness of the paper, which does a good job at characterizing this behavior empirically. Due to manifold overfitting and divergence of the likelihood, I do not think that the likelihood values are of particular interest, since on manifolds a bad generative model can reach arbitrarily high likelihood simply by fitting some section of the support of the data. In fact, in this case we see exceptionally high likelihood values together with FIDs that are good but not exceptional. Again, I do not consider this as a real weakness of the paper, but I do not put much weight on the exceptionally good BPD numbers. Other Comments Or Suggestions: None Questions For Authors: I am really intrigued by your guidance method. Are you sure that it is not currently used in the LLM literature? I do not know the literature well enough to know. If not, it would be very interesting to use it in more conventional transformers as well. Code Of Conduct: Affirmed. Overall Recommendation: 5
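The single-step Tweedie denoising referred to in this review can be made concrete in a Gaussian toy case, where the score of the noisy marginal is known in closed form and Tweedie's formula recovers the exact posterior mean (a sketch of the formula only, not the paper's learned denoiser):

```python
# Tweedie's formula: for y = x + sigma * eps with known noise level sigma,
# E[x | y] = y + sigma^2 * d/dy log p(y), where p is the NOISY marginal.
# Toy case: x ~ N(mu0, s2), so y ~ N(mu0, s2 + sigma^2) and the score is
# analytic; Tweedie then recovers the exact posterior mean.
def tweedie_denoise(y, mu0=0.0, s2=1.0, sigma=0.5):
    score = -(y - mu0) / (s2 + sigma ** 2)   # d/dy log N(y; mu0, s2 + sigma^2)
    return y + sigma ** 2 * score

# Posterior mean of the Gaussian-Gaussian model, for comparison:
def posterior_mean(y, mu0=0.0, s2=1.0, sigma=0.5):
    return (s2 * y + sigma ** 2 * mu0) / (s2 + sigma ** 2)

print(tweedie_denoise(2.0), posterior_mean(2.0))
```

In the paper's setting the analytic score is replaced by a learned one, but the inference-time update remains this single deterministic step, which is why no extra training is required.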
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging our contributions and we agree with most of your assessments. ## Q: Limitations of likelihood-based models A: An interesting point of discussion the reviewer brought up is the fundamental limitations of likelihood-based models. We agree with the reviewer that likelihood methods, especially density estimation such as NFs, can be ill-behaved on real data that is distributed on a narrow manifold. And we also agree that TarFlow needing input noise resonates with this view, which, put in other words, suggests that density estimation is indeed a different task than generative modeling. We also believe that this observation is consistent with the findings in diffusion models, where reweighting the likelihood-based loss function is beneficial for sampling quality [1, 2]. ## Q: Significance of the BPD benchmark A: A related question is whether one should value the strong BPD results that TarFlow achieves. We believe so, based on a subtle yet interesting point detailed below. Note that the BPD metric evaluates the discrete probability of quantized pixels, rather than the continuous density of real pixel values. This is a key difference because, unlike continuous density, discrete probability can always be faithfully evaluated even when the input has much lower intrinsic dimension (e.g., concatenating a constant dimension to a discrete variable multiplies the joint discrete probability by 1, instead of infinity). When using a density estimation method to evaluate BPD, one would first inflate the discrete values $\tilde{x}$ to a local hypercube $C(\tilde{x})$ that is disjoint from other discrete points (which is the case for the dequantization uniform noise), and convert the density model to a discrete probability via integration, as in $\tilde{p}(\tilde{x}) = \int_{x \in C(\tilde{x})} p_{model}(x)dx$ (this is also explained in the first paragraph of Section 2.4). 
Due to the same reasoning, the discrete probability result (hence BPD) is comparable among different families of models, including discrete modeling methods using autoregressive models and continuous ones such as diffusion and NFs. In addition, the BPD metric has a concrete grounding itself, which essentially translates to the theoretical lower bound for lossless compression of the target distribution [3], which is another way to see that BPD does not degenerate. Therefore, we believe that TarFlow achieving SoTA BPD among different modeling methods is a strong indication of its raw modeling capacity. ## Q: Temperature-based guidance for LLMs A: We would first like to confirm that we did come up with the attention-based temperature guidance on our own, and we are not aware of a similar idea being deployed in other contexts such as LLMs. Like the reviewer, we are also intrigued by its generality and we believe that it does have the potential to be applied to all Transformer-based generative models as well. For LLMs specifically, we speculate that guidance has received relatively little exploration likely due to the prevalence of finetuning, and it would indeed be interesting future work to further investigate its compatibility with our temperature-based guidance. # References [1] Denoising Diffusion Probabilistic Models, Ho et al, NeurIPS 2020 [2] Variational Diffusion Models, Kingma et al, NeurIPS 2021 [3] IDF++: Analyzing and Improving Integer Discrete Flows for Lossless Compression, Berg et al, ICLR 2021
Summary: This paper proposes to integrate a visual transformer architecture into RealNVP normalizing flows. Over the past years normalizing flows have been inferior to other types of generative models, particularly when compared to diffusion models. This paper claims that the reason for that is the design limitations of normalizing flows; therefore, by using transformer backbones with causal attention mechanisms, the authors are able to improve the generation performance and enhance the scalability of the model. The approach shares similarity with Masked Autoregressive Flows (MAF), extended to patches rather than pixels. Additionally, the input of the model is injected with small noise to smooth (in probability terms) the discrete data distribution, which in turn requires an extra final denoising step, performed using a score-based model. The paper also presents a guidance mechanism similar to the one used in diffusion models, classifier-free guidance in particular, enabling the generation to be controlled externally. Claims And Evidence: - The paper claims and shows empirically that normalizing flows indeed need a stronger deep network architecture to be able to compete with state-of-the-art diffusion models. The paper claims that the approach is scalable, similar to what has been done in diffusion models, however: - The approach is not tested on ImageNet $256\times 256$; it is only trained on AFHQ at this resolution, which is a relatively homogeneous dataset, and there aren't any quantitative metrics that support this claim. - Comparing the reported results at $128\times 128$ actually contradicts the scalability claim. In contrast to $64\times 64$, where the FID score is in line with the diffusion scores, the result on $128\times 128$ is actually worse. Methods And Evaluation Criteria: Strengths: - The approach is evaluated properly on different datasets. 
- The results shown in the paper are promising, competitive with state-of-the-art diffusion models.

Weaknesses:
- The paper is missing an ablation study on the effect of the denoiser choice.
- The results without the noise injection are not shown in the paper, especially since the paper emphasizes its importance. I think it is very important to highlight the drawbacks of not adding the noise more clearly (diversity, mode collapse, textures, etc.).
- The paper shows results only for a causal transformer network; I think there should be an additional ablation study that demonstrates why this choice is preferable over non-causal transformers.

Theoretical Claims: N/A

Experimental Designs Or Analyses: The design scheme is straightforward: the paper uses a transformer-based architecture to predict the couplings of the bijection in an autoregressive manner.

Supplementary Material: Yes.
- Extends the related work and provides a wide literature review.
- Provides additional details of the technical implementation.
- Shows additional results, highlighting the effect of the guidance mechanism and the post-processing effect on the generation results.

Relation To Broader Scientific Literature: The paper highlights the advantage of using capable architectures, transformer networks in particular, in Normalizing Flows. Prior flow-based models struggled to compete with state-of-the-art diffusion models, and this work offers a very competitive alternative (for lower resolutions at least).

Essential References Not Discussed: Although it is mentioned in the related work section, it would be much better to discuss the main differences between this method and the one proposed in Patacchiola et al., 2024.

Other Strengths And Weaknesses: Other strengths:
- The paper is clear and well written.

Other weaknesses:
- The results without the noise injection are not shown in the paper, especially since the paper emphasizes its importance.
I think it is very important to highlight the drawbacks of not adding the noise more clearly (diversity, mode collapse, textures, etc.).
- Obtaining high-fidelity generation results requires using a denoising network for post-processing the output.
- The novelty of the approach is relatively limited, particularly compared to Patacchiola et al., 2024.
- Normalizing Flows other than Real NVP are not examined.

Other Comments Or Suggestions:
- Section 2.5 can be reduced; there is no need to provide redundant details about Tweedie’s relation of score and denoising if your target is just to denoise the final image.

Questions For Authors:
- What is the denoiser architecture used in the final stage?
- What is the overhead and added computational cost of using a denoising network?
- How does not adding noise at all affect the results, in terms of diversity, mode collapse, and quantitatively (FID)?
- In terms of computations, how many
- How important is the causality part of the method? Does the method also work if the transformer were not causal? How does that affect the results?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review, and please see our responses below.

## Q: Scalability claim

A: Our claim about the scalability of TarFlow is within the context of different model sizes and training FLOPs on a given dataset, which is supported by evidence in Sec 3.5 and Figure 6. Moreover, for the comparison of ImageNet 128x128 vs 64x64, we do not agree that the former is worse. Note that the absolute FID numbers are not comparable across resolutions, but it is pretty clear that our ImageNet 128x128 samples are visually better than those on 64x64 (Figure 11 vs Figure 9). In addition, we have conducted further experiments on ImageNet 256x256 with a model of ~1.4B parameters, and we are able to achieve very competitive results, with a 50K sample FID of 4.00 in pixel space. This number again places TarFlow in a diffusion dominated regime; see the table below as a comparison.

| Method (modeling pixels) | ImageNet 256x256 FID 50K |
| -------- | ------- |
| Simple Diffusion | 3.75|
| ADM-G | 4.59 |
| RIN | 4.51 |
| TarFlow | 4.00 |

## Q: Denoising & Section 2.5

A: We believe that the reviewer has a serious misunderstanding of our contributions. To clarify, denoising is performed with the same TarFlow model we train, and **we do not use a separate denoising network**. The exact recipe is explained in Equation 8, where $\log p_{model}(y)$ corresponds to the negative training loss of the TarFlow model. This also explains why Section 2.5 is a critical piece of our method, as it was not previously clear that NF models empirically give rise to accurate score estimates, making them a suitable choice for denoising in line with Tweedie’s formula. Note that this point is also correctly recognized and acknowledged by Reviewer EWAZ as a strength of our paper. As for speed, the denoising step adds minimal overhead, due to its parallel nature.
For actual measurements, we benchmarked the model we used for ablations, ie, the one from Figure 4: it takes 13.5 seconds to generate a noisy batch of examples, and 0.14s to denoise them.

## Q: Results without noise injection

A: We assume that the reviewer is interested in the dequantization uniform noise setting, which is standard in the NF literature [1], and also the simplest way to make modeling pixels a valid density estimation problem (see, eg, Sec 3.1 of [3] for a discussion on this). As noted in the first paragraph of Section 3.3, this setting has poor numerical stability, which makes it difficult to compare with other settings fairly. Nonetheless, during the rebuttal period we managed to train such a model by using fp32 instead of our default bf16, and performed 50K FID evaluation by skipping the many samples with NaNs. See the results in the table below.

## Q: Non-causal transformer

A: First, the causal architecture is a necessary component for implementing the AR flow. More specifically, it allows us to compute the determinant analytically and also invert the transformation explicitly, due to the Jacobian of the causal transformation being lower triangular. Both conditions will break if one directly applies a non-causal Transformer. That being said, the closest baseline we can think of that uses a non-causal Transformer is to follow the channel coupling design from [1]. We performed such an ablation, and the result is shown below.

## Q: Normalizing Flows other than Real NVP

A: We have also trained a volume preserving version, by enforcing the logdet terms to be zero. See the results below.

### Summary of additional ablations, where the baseline experimental setting follows that of Figure 4. All variants have inferior results compared to the default TarFlow setting.
| Variant | ImageNet 64x64 FID 50K, cfg = 0 | ImageNet 64x64 FID 50K, cfg = 2|
| -------- | ------- | ------- |
| TarFlow default |25.3| 5.7 |
| dequantization uniform noise | 43.6 | 21.9 |
| non-causal architecture with channel coupling | 50.3 | 20.4 |
| volume preserving | 81.5 | 51.0 |

## Q: Novelty

A: We believe that TarFlow has significant novelty compared to [2]. The similarity is that they both apply Transformers to MAF. However, it is important to note that [2] does not provide a scalable recipe for high dimensional inputs. In particular, [2] only considers models with a single flow, whereas TarFlow stacks multiple flows with alternating directions and shows that this can be trained well. As shown in Figure 6(b) of our paper, using one flow (T=1) significantly limits the model’s capacity and it becomes a degenerate model for images. The other obvious difference lies in our usage of noise augmented training, denoising and guidance. All these critical pieces are completely missing in [2].

# References

[1] Density estimation using Real NVP, Dinh et al, ICLR 2017
[2] Transformer Neural Autoregressive Flows, Patacchiola et al., 2024
[3] Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design, Ho et al, ICML 2019
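For completeness, the Tweedie-formula denoising referenced in this rebuttal has the following standard form (generic notation; the paper's exact recipe is its Equation 8): for $y = x + \sigma\epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$,

```latex
\hat{x}(y) = \mathbb{E}[x \mid y] = y + \sigma^{2}\, \nabla_{y} \log p(y).
```

Since $\nabla_{y} \log p(y)$ is obtained from a single backward pass through the density model, the denoising step is a one-shot, fully parallel operation, consistent with the small measured overhead reported in the rebuttal.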
Summary: This paper scales up masked autoregressive flow (MAF) with a powerful transformer architecture, along with several techniques such as classifier guidance and noise augmentation, and achieves good performance on many datasets, including high-resolution AFHQ and the multimodal dataset ImageNet.

## update after rebuttal

The rebuttal has resolved my concerns. After careful consideration, I decided to vote Accept for this paper.

Claims And Evidence: Yes, the paper provides a clear presentation and convincing evidence for all claims.

Methods And Evaluation Criteria: Yes, the methods and evaluation criteria are standard for generative models.

Theoretical Claims: There is no theoretical claim in this paper.

Experimental Designs Or Analyses: The experimental design is well-conducted and supports all the paper's claims.

Supplementary Material: No, there is no supplementary material.

Relation To Broader Scientific Literature: This paper scales up MAF, an existing technique for training normalizing flows, to show the scalability of the normalizing flow class of generative models. The paper's result is quite interesting and promising.

Essential References Not Discussed: No, I find the references sufficient.

Other Strengths And Weaknesses: **Strengths**
The paper's presentation is clear and easy to understand.
The experiments show promising potential for normalizing flows to scale up, in comparison to diffusion, autoregressive and GAN models.
The ablations are fully provided.
Several proposed techniques, such as noise augmentation, a transformer backbone for MAF, and CFG, are investigated in MAF for the first time.

**Weaknesses**:
The sampling time is very slow compared to standard autoregressive models.
The model architecture is quite heavy (8x8 = 64 transformer blocks), which is about 2 times larger than DiT-XL.
The method is not new and has somewhat limited novelty, since classifier-free guidance and data augmentation are existing techniques.

Other Comments Or Suggestions: No

Questions For Authors: Is there any way to develop a faster inference model based on TarFlow?

Ethical Review Concerns: None

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging our contributions, and we answer each of the questions below. First of all, please note that we do have supplementary material, where we include more experimental settings, related work, and results which we believe might be interesting to the reviewer.

## Q: The sampling time is very slow compared to standard autoregressive models.

A: Given a sequence length N (ie, number of patches), number of AR flows T and the Transformer depth for each flow L, the inference cost of TarFlow evaluates to $O(TLN^2)$. This cost is equivalent to that of an autoregressive model on the same sequence but with a depth of $TL$. Therefore, the inference speed comparison between TarFlow and an AR model boils down to the combined depth, and the two families of models actually have the same inference time complexity given the same depth budget.

## Q: The architecture of the model is quite heavy, 8x8 = 64 transformer blocks, which is about 2 times larger than DiT-XL

A: Related to the previous question, our design philosophy has been to train strong models and prove that NFs are capable learning objectives; we did not emphasize reducing the total depth. As for the comparison to DiT, we argue that they are also not directly comparable, for two reasons. 1. DiT conducts experiments in the latent setting, which is known to simplify the modeling difficulty, whereas TarFlows are pixel based models. 2. The inference of DiTs is actually much deeper than that of TarFlow, due to the need for multiple function evaluations (NFEs).

## Q: The method is not new and has limited novelty since classifier free guidance and data augmentation are existing techniques

A: We respectfully disagree with the reviewer’s assessment of novelty. It is indeed true that all the essential ingredients in this work are previously established techniques; this includes MAF, Transformers, noise augmentation, denoising and guidance.
However, in the context of normalizing flows, the exact recipe for using them has been unknown, and as the reviewer already acknowledged, we are the first work to show the full potential of correctly combining them. The fact that these individual techniques are standard only strengthens the simplicity aspect of our work, and it should not be treated as a penalty to novelty. In fact, the same argument can be made for many groundbreaking works — for example, denoising autoencoders are well studied subjects [1], but correctly applying them to generative modeling [2, 3] is still highly novel, and it gave rise to the diffusion and score based generative model revolution.

## Q: Is there any way to develop a faster inference model based on TarFlow?

A: Although we did not focus on inference speed in this work, we are excited by several possibilities for improving it in future work. First of all, there is a large body of literature (eg, speculative decoding [4]) dedicated to speeding up the inference of autoregressive models, which TarFlow can already benefit from due to its autoregressive architecture. Another promising direction is distillation, which has achieved great success in diffusion models [5]. We believe that distillation should be naturally compatible with TarFlow, and arguably more so than with diffusion models, as we have an explicit bijective mapping between an input x and noise z.

# References

[1] Extracting and composing robust features with denoising autoencoders, Vincent et al, ICML 2008
[2] Denoising Diffusion Probabilistic Models, Ho et al, NeurIPS 2020
[3] Generative Modeling by Estimating Gradients of the Data Distribution, Song & Ermon, NeurIPS 2019
[4] Fast inference from transformers via speculative decoding, Leviathan et al, ICML 2023
[5] Progressive Distillation for Fast Sampling of Diffusion Models, Salimans & Ho, ICLR 2023
Structure Is All You Need: Structural Representation Learning on Hyper-Relational Knowledge Graphs
Accept (poster)
Summary: This paper presents MAYPL, a novel structure-driven representation learning method for hyper-relational knowledge graphs (HKGs). Unlike existing methods that rely on transformers or GNNs with limited structural utilization, MAYPL fully exploits the structural properties of HKGs to achieve state-of-the-art link prediction performance. The core idea of MAYPL is to initialize entity and relation representations based on their connectivity patterns, followed by an attentive neural message-passing mechanism that aggregates fact-level, entity-level, and relation-level features. This purely structure-based approach enables MAYPL to perform both transductive and inductive link prediction, making it capable of generalizing to entirely new entities and relations at inference time. The paper demonstrates that MAYPL outperforms 40 baseline methods on 10 benchmark datasets, covering both standard and inductive link prediction tasks. The results highlight that leveraging structure alone is sufficient for effective knowledge graph reasoning, challenging the necessity of complex embedding-based models.

Claims And Evidence: The major claims in the paper are largely supported by empirical results and theoretical reasoning:
1. MAYPL fully exploits the structural properties of hyper-relational knowledge graphs (HKGs) for representation learning
- The proposed method uses only structural information (without textual embeddings or external node features) and achieves strong results.
- The architecture includes fact-level, entity-level, and relation-level message passing, which aligns with the claim of capturing hierarchical structural properties.
2. MAYPL generalizes to unseen entities and relations (inductive reasoning) effectively
- Mainly supported by the evaluation results.
Methods And Evaluation Criteria: Mainly yes. On the method side, MAYPL is designed to fully leverage graph structure rather than relying on embedding-based models, which aligns well with the need for interpretable, structure-driven reasoning in HKGs. The message-passing mechanism across fact-level, entity-level, and relation-level information is a strong design choice for capturing hyper-relational dependencies. Also, initializing the embeddings with just structural data is reasonable in the inductive setting, since that is the only information we have about new entities or relations. On the evaluation side, the evaluation covers a diverse range of transductive and inductive link prediction datasets, ensuring MAYPL’s results are not dataset-specific. The results also show that MAYPL is effective in both settings. The evaluation metrics are standard in KGC tasks. The case study on the most similar relations and entities at the start and end of training also supports the claims.

Theoretical Claims: The work does not provide proofs for theoretical claims. The claim that MAYPL effectively captures hyper-relational knowledge graph structure through message passing and aggregation is supported by the model design, and the formulations for message passing across different levels of the graph align with standard GNN models.

Experimental Designs Or Analyses:
- Use of Multiple Benchmark Datasets for a Comprehensive Evaluation
- The paper evaluates MAYPL on 10 benchmark datasets, covering both standard and inductive link prediction tasks.
- The datasets include:
  - Traditional KG benchmarks (FB15k-237, WN18RR).
  - Hyper-relational datasets (JF17K, WikiPeople).
  - Inductive reasoning datasets (new unseen entities at test time).
- This evaluation ensures MAYPL is tested on a variety of knowledge graph structures. The inclusion of inductive datasets helps validate generalization capabilities.
- Evaluation Metrics (MRR, Hits@K) Are Standard in KG Research
- The paper uses Mean Reciprocal Rank (MRR) and Hits@K as primary evaluation metrics.
- These metrics align with standard KG completion benchmarks, ensuring results are comparable to prior work.
- Inclusion of Inductive Link Prediction Tasks
- Many KG models focus only on transductive reasoning, meaning they struggle with new entities.
- MAYPL is tested in an inductive setting, showing its ability to generalize beyond the training set.
- Component Ablation Study is Useful
- The paper includes ablation studies to test the contribution of different MAYPL components. They help identify which structural properties contribute most to performance and improve interpretability of the model’s reasoning process.

Supplementary Material: Yes, here are the materials I reviewed:
- Appendix A presents a more detailed comparison of MAYPL with previous methods on knowledge representation and inductive inference ability.
- Appendix D provides additional information on hyperparameters and experimental results:
  - Extended benchmark results for all datasets, including breakdowns for different link prediction metrics (MRR, Hits@K).
  - Hyperparameter settings including learning rate, hidden dimension size, batch size, number of message-passing layers, etc.
- Case studies about MAYPL in Appendix G
  - These examples provide additional support to show how the structure-focused design helps the model better capture the similarity between entities via similar structural information.

Relation To Broader Scientific Literature: The paper presents MAYPL (Message pAssing framework for hYper-relational knowledge graph rePresentation Learning), a novel structure-driven representation learning method for hyper-relational knowledge graphs (HKGs). It builds upon several existing lines of research, including graph-based reasoning for knowledge graphs, hyper-relational embedding methods, message-passing networks, and inductive KG reasoning.
Its novelty lies in leveraging multi-level message passing to capture hyper-relational structure without relying on embeddings. The experiments on general KGC, inductive KGC, and the ablation studies present the effectiveness of learning structural information in improving the quality of representations of hyper-relational knowledge graphs.

Essential References Not Discussed:
1. I believe HittER is also quite relevant to this work. HittER (Hierarchical Transformers for Knowledge Graph Embeddings) (Chen et al, EMNLP 2021) is a transformer-based model specifically designed for hyper-relational knowledge graphs (HKGs). Instead of relying on GNNs or embeddings, HittER models hyper-relational facts as a sequence of tokens and applies attention mechanisms to capture dependencies among entities, relations, and attributes. Both HittER and MAYPL focus on hyper-relational knowledge graphs. While HittER uses transformers for relational reasoning, MAYPL relies on a multi-level message-passing framework to capture structural dependencies. I notice this paper already includes discussion on transformer-based methods, but HittER is one of the SOTA works, so it would be better to include it as well.

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: N/A

Questions For Authors:
1. In the introduction, the authors mention that some previous works use simple one-hop neighborhood information to learn representations on HKGs, which is not effective enough. But in the design of MAYPL, the definition of message passing and structural initialization only involves the co-occurring entities/relations, facts, and incident entities, right? That information is also within one hop in the HKG. Could you explain more about how you can guarantee that structural information beyond one hop is also well captured?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: >I believe HittER is also quite relevant to this work. Hitter (Hierarchical Transformers for Knowledge Graph Embeddings) (Chen et al, EMNLP 2021) is a transformer-based model specifically designed for hyper-relational knowledge graphs (HKGs). Instead of relying on GNNs or embeddings, HittER models hyper-relational facts as a sequence of tokens and applies attention mechanisms to capture dependencies among entities, relations, and attributes. Both HittER and MAYPL focus on hyper-relational knowledge graphs. While HittER uses transformers for relational reasoning, MAYPL relies on a multi-level message-passing framework to capture structural dependencies. I notice this paper already includes discussion on transformer-based methods, but HittER is one of the SOTA works, so it would be better to include it as well. **R1.** Since HittER[1] is designed for handling vanilla knowledge graphs, not hyper-relational knowledge graphs, a direct comparison of HittER and MAYPL may not be feasible. However, we believe extending MAYPL to QA tasks, similar to how HittER is applied to those tasks, can be an interesting future research. We will discuss this point and cite HittER in this context in the camera-ready version (if this paper is accepted). [1] Chen et al., “HittER: Hierarchical Transformers for Knowledge Graph Embeddings”, EMNLP 2021. >In the introduction, the authors mention some previous works use simple one-hop neighborhood information to learn representations on HKGs which is not effective enough. But in the design of MAYPL, the definition of message passing and structural initialization only involves the co-occur entities/relations, facts and incident entities, right? Those information are also with-in one-hop in the HKG. Could you explain more about how you can guarantee the structural information beyond one-hop are also well-captured? 
**R2.** As discussed in our introduction, some prior works (e.g., HyperFormer[2]) use simple one-hop neighborhood information. For example, when predicting a missing tail entity in an incomplete fact $((v_1,r_1,?), \\{(r_2,v_2), \cdots, (r_k, v_k)\\})$, HyperFormer collects all facts whose head entity is $v_1$, and each fact is individually encoded using a transformer. Those encoded facts are then averaged to be used for link prediction. Since each fact is independently fed into a transformer and only facts that include $v_1$ are considered, the structural information in this approach is restricted to the one-hop neighbors of the head entity $v_1$.\
&nbsp;&nbsp;&nbsp;&nbsp;In contrast, MAYPL captures multi-hop structural information through its attentive neural message passing mechanism that employs $L$ layers to consider $L$-hop distant neighboring information. In the attentive neural message passing, the message of a fact is computed based on the composition and connectivity information of its entities and relations, as well as their positions. These messages are then attentively aggregated to update the representations of entities and relations. When an entity’s representation is updated in the first layer, it incorporates information from its one-hop neighbors by aggregating the messages of the facts to which it belongs. Then, since each entity already encodes its one-hop structural information, the subsequent update of an entity representation incorporates the information of the neighbors of its direct neighbors, effectively capturing two-hop structural information. After $L$ layers, the entity representations are updated by considering their $L$-hop distant neighbors. The same principle also applies to relations. In this way, MAYPL captures the structural information up to $L$ hops.

[2] Hu et al., “HyperFormer: Enhancing Entity and Relation Interaction for Hyper-Relational Knowledge Graph Completion”, CIKM 2023.
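The layer-by-layer growth of the receptive field described in this response can be illustrated with a generic message-passing sketch (a toy illustration in our own notation, not the MAYPL implementation): after $L$ rounds of neighbor aggregation, a node's representation is influenced by exactly the nodes within $L$ hops of it.

```python
# Generic illustration (not the MAYPL implementation): after L rounds of
# neighbor aggregation, a node's representation depends on exactly the
# nodes within L hops of it.
def message_passing_reach(adj, num_layers):
    """adj: {node: [neighbors]}. Returns, for each node, the set of nodes
    whose initial features influence its representation after num_layers
    aggregation rounds."""
    reach = {v: {v} for v in adj}  # layer 0: a node only "sees" itself
    for _ in range(num_layers):
        # one round: every node merges the reach sets of its neighbors
        reach = {v: reach[v].union(*(reach[u] for u in adj[v])) for v in adj}
    return reach

# Path graph 0 - 1 - 2 - 3
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(message_passing_reach(path, 2)[0])  # {0, 1, 2}
```

On this path graph, node 0 is influenced by {0, 1} after one layer and {0, 1, 2} after two, mirroring the $L$-hop argument above.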
--- Rebuttal Comment 1.1: Comment: Thanks for the response! It's now clear to me how the whole k-hop information is captured, and I will keep my rating as it is. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your valuable review and comments. Thank you very much!
Summary: This paper presents MAYPL, a structure-driven representation learning method for hyper-relational knowledge graphs. MAYPL contributes a structure-driven initializer and attentive neural message passing to learn entity and relation representations. The method is designed to handle transductive and inductive inference settings, and the paper reports extensive experimental results across multiple benchmark datasets, showing superior performance compared to existing SOTA methods.

Claims And Evidence: The claims made in the submission are generally well-supported by evidence. The authors demonstrate through extensive experiments that their structure-driven approach outperforms existing methods on various link prediction tasks. The results across multiple datasets (WD50K, WikiPeople, NL-50, etc.) consistently show that MAYPL achieves superior performance in MRR, Hit10, and Hit1 metrics.

Methods And Evaluation Criteria: The methods make sense for the HKG problem. The structure-driven initializer and attentive neural message-passing mechanism are effective for performance. However, I have seen too many papers based on RGCN that still make trivial modifications in message passing. This paper's method only shows unique effectiveness in the context of HKGs but does not propose a new paradigm. Moreover, the comparison does not include some of the latest state-of-the-art baselines [1][2], which undermines the value of the extensive experimental results.
- [1] Hyper-Relational Knowledge Representation Learning with Multi-Hypergraph Disentanglement. WWW2025.
- [2] HySAE: An Efficient Semantic-Enhanced Representation Learning Model for Knowledge Hypergraph Link Prediction. WWW2025.
- [3] HyperMono: A Monotonicity-aware Approach to Hyper-Relational Knowledge Representation. arxiv

Theoretical Claims: There is no proof for the design of MAYPL; it is an experiment-oriented paper.
Experimental Designs Or Analyses: No, there is no proof of the design of MAYPL; it is an experiment-oriented paper.

Supplementary Material: Yes, I reviewed the supplementary material. The authors provided a comparison with existing methods, a more detailed experimental setup, more dataset information, and additional results.

Relation To Broader Scientific Literature: The paper's contributions are well-situated within the broader literature on KGRL. It builds upon previous work on HKGs while addressing limitations in utilizing structural information. It references prior works in KGC (inductive/transductive settings) and structural representation learning.

Essential References Not Discussed: I think a key comparison with recent SOTA methods is necessary, and they are highly related to HKG completion tasks.
- [1] Hyper-Relational Knowledge Representation Learning with Multi-Hypergraph Disentanglement. WWW2025
- [2] HySAE: An Efficient Semantic-Enhanced Representation Learning Model for Knowledge Hypergraph Link Prediction. WWW2025
- [3] HyperMono: A Monotonicity-aware Approach to Hyper-Relational Knowledge Representation. arxiv

There must be others that have escaped my attention, in addition to the three I mentioned.

Other Strengths And Weaknesses:
## Strengths
- Comprehensive introduction of structural representation learning from traditional RGCN into HKGs
- Effective handling of the inductive completion test setting

## Weaknesses
- Limited discussion of computational complexity and scalability, which is decisive for real-world applications. A better initializer should contribute to faster convergence and better efficiency, but there is a lack of appropriate discussion.
- It remains an open question how MAYPL performs on larger HKGs.
- A case study or a more in-depth analysis of how MAYPL works could help readers better understand the method, despite its promising but not very informative experimental performance.
Other Comments Or Suggestions: Providing more detailed visualizations of the attention mechanisms and how they capture structural patterns would enhance the interpretability of the results. Also, a more comprehensive comparison with recent SOTA methods is needed; HKRL seems stronger than MAYPL.

Questions For Authors:
1. Structural learning on knowledge graphs (KGs) has already been explored on various KGs. I actually don't see much new in the formulas. I need stronger clarification to demonstrate that this paper's method has unique structural learning characteristics in the context of HKGs. This affects my overall impression of whether the paper is a creative work or simply a modification of the message-passing mechanism, which influences my perception of the upper limit of the score I would assign to this paper.
2. The paper has sufficient experiments, which is a major strength. However, I need to see comparisons with some of the new methods I mentioned that have been overlooked, in terms of performance or efficiency, to convince me that this is truly a state-of-the-art approach.
3. Could the training process achieve the same convergence effect without the proposed initialization, albeit requiring a longer time?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1**\
Message passing is a fundamental concept underlying most existing GNNs for structural learning. Therefore, GNNs are not distinguished by their adherence to the message passing paradigm but by how they compute messages and update representations. MAYPL presents its unique way of encoding messages and updating representations to thoroughly leverage the structural information of HKGs. Even if we scope down to KGs, MAYPL substantially differs from any existing KG methods. For example, while existing fully inductive KG methods construct separate relation graphs that consider only simple incidence relationships between relations, MAYPL directly utilizes the given KG structure and incorporates the interconnections between entities and relations along with the triplets’ composition information. MAYPL’s performance on inductive link prediction on KGs shows its superiority in modeling KGs (Table 4).\
&nbsp;&nbsp;&nbsp;&nbsp;While there have been various structure-based learning approaches for KGs, designing a purely structure-oriented representation learning method for HKGs has remained challenging since the structure of HKGs is far more complicated than that of KGs. As a result, most existing HKG methods (e.g., GRAN, Hy-Transformer, and HyNT) consider an HKG as a set of individual hyper-relational facts instead of considering an HKG as a graph itself. While StarE and HAHE introduced a GNN-based encoder, they are limited (e.g., ignoring relations) and still rely on a non-GNN decoder for link prediction.\
&nbsp;&nbsp;&nbsp;&nbsp;None of the existing HKG methods comprehensively utilize the graph structure of an HKG by considering the multi-hop structural information between entities, relations, and facts along with their composition, co-occurrence, connectivity, and positional information, as MAYPL does. Furthermore, MAYPL does not employ a non-GNN decoder and only involves structure-based learning and prediction.
**We emphasize that MAYPL is the first structure-oriented method for HKGs that can be applied in both transductive and inductive learning settings.** **Q2**\ The ICML submission deadline was Jan. 30th, while the preprints of WWW 2025 papers were made public on OpenReview on Jan. 29th. According to the reviewer guidelines, *authors cannot expect to discuss other papers that have only been made publicly available within four months of the submission deadline*. We believe **our paper should neither be evaluated nor penalized by WWW 2025 papers** [1,2]. Besides, we found [1]'s results questionable due to potential test leakage in their code: the model uses other facts from the test set during prediction (line 113 in main.py). But, since the current version is not final yet (WWW 2025 starts on Apr. 28), it can be fixed in the final version. Thus, comparing the not-yet-final results of [1] with MAYPL is inappropriate.\ &nbsp;&nbsp;&nbsp;&nbsp;HyperMono [3] is an arXiv paper that has not been peer-reviewed; the table below shows that MAYPL outperforms HyperMono. Also, we have made every effort to cover all relevant HKG papers published at top-tier venues at the time of submission.

|||WD50K|||WP-||
|-|-|:-:|-|-|:-:|-|
||MRR|Hit10|Hit1|MRR|Hit10|Hit1|
|HyperMono|0.375|0.522|**0.298**|0.494|**0.657**|0.390|
|MAYPL|**0.381**|**0.544**|0.297|**0.519**|**0.657**|**0.444**|

[1] Hyper-Relational Knowledge Representation Learning with Multi-Hypergraph Disentanglement, WWW 2025\
[2] HySAE: An Efficient Semantic-Enhanced Representation Learning Model for Knowledge Hypergraph Link Prediction, WWW 2025\
[3] HyperMono: A Monotonicity-aware Approach to Hyper-Relational Knowledge Representation, arXiv

**Q3 & W1**\ Our structure-driven initializer **learns the parameters described in Section 4.1, which are used to compute the initial representations of entities and relations**. It is not designed to accelerate convergence or improve computational efficiency.
To assess its impact, we conducted an ablation study by removing the structure-driven initializer (check (i) in Table 7), which led to a significant degradation in performance. This confirms the importance of our structure-driven initializer. Regarding scalability, we provided the costs of MAYPL in Tables 14-16. For additional scalability comparison between MAYPL and other methods, please read our response to **C3** of **Reviewer iapV**. **W2**\ The HKGs used in our paper, WD50K, WikiPeople, and WikiPeople-, are the largest among the most commonly used HKGs available in the literature. **W3**\ We provided case studies in Table 8 and Table 19 to help readers understand how MAYPL works. Table 8 shows that MAYPL operates by first computing reasonable initial representations using the structure-driven initializer and then refining these representations through attentive neural message passing. Table 19 shows that the top 3 predictions of MAYPL include entities of the same type as the answer, and MAYPL accurately predicts answers by appropriately considering the qualifiers in facts.
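As a quick reference for the ranking metrics reported in the rebuttal tables above (MRR, Hit@1, Hit@10), the following is a minimal illustrative sketch of how these metrics are conventionally computed from the 1-indexed ranks of the correct answers; it is not the authors' evaluation code.

```python
def mrr_and_hits(ranks, k):
    """Compute MRR and Hits@k from 1-indexed ranks of the correct answers."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits_at_k = sum(1 for r in ranks if r <= k) / len(ranks)
    return mrr, hits_at_k

# Example: correct entities ranked 1st, 4th, and 2nd across three queries.
mrr, hits1 = mrr_and_hits([1, 4, 2], k=1)
print(round(mrr, 3))    # (1 + 1/4 + 1/2) / 3 -> 0.583
print(round(hits1, 3))  # one of three ranks is 1 -> 0.333
```

Higher is better for both metrics; MRR rewards near-misses smoothly, while Hits@k only counts answers ranked within the top k.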
Summary: This paper proposes MAYPL, a GNN-based method designed for inductive reasoning on hyper-relational knowledge graphs (HKGs), a specific variant of knowledge graphs. MAYPL initializes representation vectors based on the HKG structure and utilizes a structure-driven message-passing mechanism, enabling it to perform inductive inference on unseen entities and relations. Extensive experimental results demonstrate its robust performance, outperforming existing methods across multiple benchmarks. ## update after rebuttal: I thank the authors for their responses. They state that MAYPL is not a foundation model, yet assert its claimed inductive capability on entirely new HKGs. Avoiding a direct comparison with ULTRA remains unconvincing. Besides, no concrete solutions were provided to address the model's architectural complexity and inefficiency. Therefore, I would like to keep my score. Claims And Evidence: The claim about the model's superior performance on inductive reasoning is not convincing enough; please check the comments in the Experiments part. Methods And Evaluation Criteria: The structure-driven representation learning in the proposed method makes sense. This design is tailored to the specific structure of facts in HKGs but lacks significant novelty. The initialization and message-passing encoding design are complex and somewhat redundant, leading to inefficiency. For technical design, it would be expected to adhere to Occam's Razor and pursue a more lightweight yet equally effective approach. Given that recent work has already proposed fully inductive reasoning models for KGs (such as ULTRA), the technical contribution of this paper appears incremental. Theoretical Claims: The paper lacks theoretical analysis to substantiate the proposed design. Experimental Designs Or Analyses: The evaluation of the proposed method (designed for HKGs) on normal KG inductive datasets raises concerns about fairness.
Baselines like NBFNet and RED-GNN are not specifically designed for relation inductive reasoning, making the comparison less equitable. To better demonstrate superiority in inductive reasoning, a comparison with ULTRA under the same pre-training KG conditions would be more appropriate. Supplementary Material: Yes Relation To Broader Scientific Literature: Please check the above suggestions. Essential References Not Discussed: No Other Strengths And Weaknesses: The draft is well-organized and easy to follow. In Tables 14 and 15, the authors present the runtime and memory costs for each dataset. However, these costs are notably high for the small-scale KGs used (with only thousands of entities), which raises concerns about the scalability and practicality of the method. Other Comments Or Suggestions: Please check the above suggestions. Questions For Authors: Please check the above suggestions. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: >The evaluation of the proposed method (designed for HKGs) on normal KG inductive datasets raises concerns about fairness. Baselines like NBFNet and RED-GNN are not specifically designed for relation inductive reasoning, making the comparison less equitable. To better demonstrate superiority in inductive reasoning, a comparison with ULTRA under the same pre-training KG conditions would be more appropriate.

**A1.** The baselines in our paper, e.g., NBFNet and RED-GNN, have been widely adopted in the inductive KG completion literature, including relation inductive reasoning; our experiments also include all available state-of-the-art relation inductive reasoning models [1,2,3]. ULTRA is a foundation model, whereas MAYPL is not a foundation model. While foundation models are pre-trained on multiple KGs and perform link prediction on various KGs, non-foundation models like MAYPL are trained on a single training graph and usually tested on a single inference graph. These fundamentally different training and evaluation settings make a direct comparison between ULTRA and MAYPL inappropriate. As mentioned in “Limitations and Future Work”, we plan to extend MAYPL into a foundation model as future work.

[1] InGram: Inductive Knowledge Graph Embedding via Relation Graphs, ICML 2023\
[2] Generalize to Fully Unseen Graphs: Learn Transferable Hyper-Relation Structures for Inductive Link Prediction, MM 2024\
[3] Inductive Knowledge Graph Embedding via Exploring Interaction Patterns of Relations, CIKM 2024

>This design is tailored to the specific structure of facts in HKGs but lacks significant novelty. The initialization and message-passing encoding design are complex and somewhat redundant, leading to inefficiency. For technical design, it would be expected to adhere to Occam's Razor and pursue a more lightweight yet equally effective approach.
Given that recent work has already proposed fully inductive reasoning models for KGs (such as ULTRA), the technical contribution of this paper appears incremental. **A2.** Our structure-driven initializer and the attentive neural message passing are not redundant, as our ablation studies indicate in Table 7, where (i) and (ii) indicate the cases of removing the initializer and the attentive message passing, respectively. Removing either of these modules degrades performance drastically, indicating both are critical in MAYPL. Indeed, these modules serve distinct roles; the structure-driven initializer considers the interconnections and co-occurrences between entities and relations, whereas the attentive neural message passing computes fact-level messages using compositional information to update the entity/relation representations. This stepwise approach enables MAYPL to effectively capture the diverse structural information of HKGs.\ &nbsp;&nbsp;&nbsp;&nbsp;Existing fully inductive reasoning models, e.g., ULTRA and InGram, can handle only vanilla KGs but not HKGs, whereas MAYPL can handle both KGs and HKGs. Even if we scope down to KGs, MAYPL substantially differs from any existing KG methods. For example, while existing fully inductive KG methods construct separate relation graphs that consider only simple incidence relationships between relations, MAYPL directly utilizes the given KG structure and incorporates the interconnections between entities and relations along with the triplets' composition information. MAYPL's performance on inductive link prediction on KGs shows its superiority in modeling KGs (Table 4). In addition to inductive reasoning, MAYPL also achieves state-of-the-art performance on transductive HKG reasoning tasks. Please read our response to **C1** of **Reviewer iapV** for how MAYPL is distinguished from prior HKG methods.
MAYPL is the first structure-oriented method for HKGs that only utilizes the structure of HKGs from initialization to link prediction.

>In Tables 14 and 15, the authors present the runtime and memory costs for each dataset. However, these costs are notably high for the small-scale KGs used (with only thousands of entities), which raises concerns about the scalability and practicality of the method.

**A3.** According to Tables 14-16, MAYPL needs 35 minutes and 1.2GB of memory to process WK-50 (12K entities, 82K facts). On MFB-IND (3K entities, 337K facts), MAYPL needs 7 hours and 27GB. On WikiPeople (48K entities, 306K facts), the largest dataset in our paper, MAYPL needs 21 hours and 45.8GB. While these demonstrate a reasonable cost relative to the HKGs' scale, one exception is WP-IND. Despite containing only 4K entities and 4K facts, MAYPL needs 8 hours and 1.4GB on WP-IND. This is the only dataset that exhibits a high training time relative to its scale, which we attribute to its sparsity, requiring more epochs for training. For a comparison to other methods on scalability, please read our response to **C3** of **Reviewer iapV**, showing that MAYPL is comparable to the baseline methods in scalability while achieving the best prediction performance.
Summary: The paper proposes a structure-driven representation learning method for hyper-relational knowledge graphs (HKGs). Traditional knowledge graph models extend simple triplets into hyper-relational facts by incorporating qualifiers, but many existing methods fail to effectively utilize the structure of HKGs. The authors introduce a novel representation learning approach MAYPL (Message pAssing framework for hYper-relational knowledge graph rePresentation Learning). The study concludes that emphasizing HKG structure can lead to superior performance on the hyper-relational knowledge graph completion. Claims And Evidence: The major claims are well supported by the empirical studies. Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate. Theoretical Claims: N.A. This paper does not include the theoretical analysis. Experimental Designs Or Analyses: Checked. The experiments mainly follow standard procedures. Supplementary Material: N.A. Relation To Broader Scientific Literature: This paper belongs to a subfield of graph learning—hyper-relational graphs—which may have some influence on other areas of graph learning, especially those that focus purely on graph structures without emphasizing node and edge features. Essential References Not Discussed: Overall, the paper provides a well-conducted literature review, covering well-known works on hyper-relational knowledge graphs, knowledge hypergraphs, and N-ary relational data. Other Strengths And Weaknesses: **Strengths** S1. The paper presents a clear and well-structured approach with strong logical coherence. S2. The experimental evaluation is comprehensive, covering major baselines, most commonly used datasets in recent years, and various experimental settings. **Concerns** C1. The title, “Structure Is All You Need: Structural Representation Learning on Hyper-Relational Knowledge Graphs”, may need reconsideration. 
In the standard knowledge graph setting (including hyper-relational knowledge graphs), the input typically consists only of the graph structure, while node and edge features are often ignored (as they are generally assumed to be absent). In other words, the input itself is already purely structural, meaning that the phrase “Structure Is All You Need” may be meaningless since “Structure Is All You Have”. C2. Compared to Transformer-based models, as well as methods in KHG and NNR, GNNs naturally preserve the topological structure of graphs without losing feature information. From this perspective, the paper primarily upgrades GNN-based methods by hierarchically modeling the main triple graph, the qualifier relationships, and the intrinsic connections between complex facts—essentially serving as a more advanced GNN-based encoder. Therefore, there is a risk that the paper overclaims its core contributions. C3. A crucial aspect not discussed in this paper is the efficiency comparison between methods, including training complexity, inference complexity, and # parameters. Intuitively, this approach might have one of the highest training times and parameter sizes among all methods. Besides Tables 14–16, the authors should objectively discuss and present any efficiency limitations (if they exist), such as training time comparisons, parameter size comparisons, and learning curve comparisons. C4. The entity and relation learning approach in Section 4.1 is intuitive and reasonable. However, does this learning strategy potentially introduce or exacerbate the over-smoothing issue? Beyond the results in Tables 14–16, the authors should provide an ablation study on the impact of the $\hat{L}$ setting. C5. Figure 1 can be further improved to be clearer and more effectively illustrate the core design ideas of the paper. The current version still appears somewhat messy. Other Comments Or Suggestions: N.A. Questions For Authors: Please focus your response on C1, C2, C3, and C4. 
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **C1**\ While the input of many HKG methods is only an HKG itself without extra features, most existing HKG methods (e.g., GRAN, Hy-Transformer, and HyNT) consider an HKG as a set of individual hyper-relational facts and feed each fact into a transformer independently, disregarding the interconnectedness between different facts; they do not effectively and sufficiently consider the HKG's structure. While StarE and HAHE introduced a GNN-based encoder, they are limited (e.g., ignoring relations) and still depend on a transformer-based decoder for link prediction. None of the existing HKG methods comprehensively utilize the graph structure of an HKG by considering the multi-hop structural information between entities, relations, and facts along with their composition, co-occurrence, connectivity, and positional information, as MAYPL does. Furthermore, MAYPL does not employ a non-GNN decoder and only involves structure-based learning and prediction. MAYPL's superior performance on a range of link prediction tasks demonstrates the importance of thoroughly leveraging the structural information of HKGs, and our paper title intends to emphasize this point. **C2**\ MAYPL is a standalone, end-to-end GNN model, not just a more advanced GNN-based encoder. Note that MAYPL does not require any non-GNN decoder, and this distinguishes it from existing HKG approaches. Unlike existing HKG methods that employ GNN-based encoders but rely on non-GNN decoders, such as transformers, for link prediction, MAYPL directly leverages the parameters learned in the structure-driven initializer and the attentive neural message passing, thereby eliminating the need for an external decoder for link prediction. MAYPL only utilizes the structure of HKGs from initialization to link prediction, which turns out to be crucial for reasoning on HKGs; this is the claim we make in our paper.
**C3**\ On WikiPeople-, we compare MAYPL and other methods regarding the training time, the number of parameters, and MRR, where we present all baselines whose training times and parameter sizes are available.

||Training time|# parameters|MRR|
|-|-|-|-|
|StarE|4 days|8.2M|0.491|
|GRAN|10h|>8.9M|0.503|
|Hy-Transformer|5h|7.8M|0.501|
|HyperFormer|N/A|67.0M|0.473|
|HAHE|12h|30.2M|0.509|
|MAYPL|20h|10.5M|0.519|

MAYPL requires 20 hours of training, less than a quarter of StarE and ~1.7 times that of HAHE. Also, MAYPL requires one-third of the parameters of HAHE and one-sixth of HyperFormer while using ~1.3 times the parameters of StarE and Hy-Transformer. In MRR, MAYPL outperforms all baselines. While we briefly mentioned our plans for improving the scalability of MAYPL in “Limitations and Future Work”, we can expand them and include the above comparison in the camera-ready version with the extra page.\ *[Details]* HyperFormer didn't report its training time. While we can't report the exact parameter count of GRAN due to its outdated environment, we know it requires at least 8.9M parameters since it learns individual entity/relation representations.

**C4**\ We believe you meant $\tilde{L}$. We provide an ablation study on the impact of $\tilde{L}$. The table below shows MAYPL's MRRs for different values of $\tilde{L}$ on WK-50 and WD20K(100)v2. The last row reports the best baseline methods' MRRs.

|$\tilde{L}$|WK-50|WDv2|
|-|-|-|
|1|0.089|0.277|
|2|0.086|0.272|
|3|0.109|0.298|
|4|0.088|0.281|
|5|0.095|0.262|
|6|0.090|0.248|
|7|0.096|0.235|
|8|0.097|0.263|
|9|0.095|0.256|
|10|0.095|0.249|
|Best baseline|0.076|0.067|

While $\tilde{L}$ affects the performance, there is no evidence of over-smoothing since there is no tendency for a larger $\tilde{L}$ to continuously degrade the performance.
While $\tilde{L}$ is a tunable hyperparameter, MAYPL consistently outperforms the best-performing baselines across all values of $\tilde{L}$.\ &nbsp;&nbsp;&nbsp;&nbsp;For deeper analysis, we analyzed the Dirichlet energy [1] of MAYPL trained with $\tilde{L}=10$, computed at each layer of our structure-driven initializer. Following Equation (3) of [1] and applying it to HKGs, we computed the Dirichlet energy values across different layers, shown in the table below:

|$\tilde{l}$|1|2|3|4|5|6|7|8|9|10|
|-|-|-|-|-|-|-|-|-|-|-|
|WK-50|23.6|25.7|25.4|26.8|26.6|27.6|23.6|24.9|22.9|26.7|
|WDv2|54.4|57.6|58.3|59.5|60.8|68.4|68.2|67.3|67.2|55.8|

According to [1], the Dirichlet energy should constantly decrease as the depth increases when over-smoothing occurs. In our structure-driven initializer, the Dirichlet energy does not consistently decrease with increasing depth, indicating that over-smoothing does not occur. Additionally, as discussed in Appendix D.1, we employ residual connections and layer normalization in all layers of MAYPL, which are commonly used strategies for preventing over-smoothing in GNNs [2].

[1] “A Survey on Oversmoothing in Graph Neural Networks”, arXiv 2023.\
[2] “Residual Connections and Normalization Can Provably Prevent Oversmoothing in GNNs”, ICLR 2025.
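For readers unfamiliar with the over-smoothing diagnostic used in the rebuttal above: Dirichlet energy measures how much neighboring nodes' embeddings differ, and a value collapsing toward zero across layers is the standard signature of over-smoothing. The following is a minimal sketch of a simplified, unnormalized variant over a plain edge list, not the authors' HKG-adapted computation from Equation (3) of [1].

```python
import numpy as np

def dirichlet_energy(X, edges):
    """Unnormalized Dirichlet energy of node embeddings X over an edge list.

    X: (num_nodes, dim) array of embeddings; edges: iterable of (i, j) pairs.
    Energy near zero means neighboring embeddings have collapsed together.
    """
    return sum(np.sum((X[i] - X[j]) ** 2) for i, j in edges)

# Toy check on a triangle graph: identical embeddings (fully over-smoothed)
# give zero energy, while distinct embeddings give positive energy.
edges = [(0, 1), (1, 2), (2, 0)]
X_collapsed = np.ones((3, 4))
X_spread = np.eye(3, 4)
assert dirichlet_energy(X_collapsed, edges) == 0.0
assert dirichlet_energy(X_spread, edges) > 0.0
```

Tracking this quantity layer by layer, as the rebuttal does, distinguishes genuine over-smoothing (monotone decay) from ordinary layer-to-layer fluctuation.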
Understanding the Kronecker Matrix-Vector Complexity of Linear Algebra
Accept (poster)
Summary:
- The paper addresses several fundamental linear algebraic problems in the Kronecker matrix-vector query model.
- Its main contribution is a proof that at least an exponential number of Kronecker matrix-vector products is required to get a decent estimate of key properties of the matrix.
- It also proves that Kronecker matrix-vector algorithms with small-alphabet queries require polynomial complexity for zero testing, whereas it takes O(1) queries in the non-Kronecker case.
- The analysis reveals new insights into the fundamental complexity of linear algebra in the Kronecker matrix-vector model.

Claims And Evidence: All claims made in the submission are well supported by clear and convincing evidence as far as I am aware. Methods And Evaluation Criteria: Proper mathematical tools are used for proving the theoretical claims in the paper. Theoretical Claims: I have to admit that I did not check the correctness of the proofs due to the time limit. Experimental Designs Or Analyses: There is no experimental design or analysis in the paper, as it is a purely theoretical work. Supplementary Material: I did not review the appendix in the submission. Relation To Broader Scientific Literature: The theoretical result reveals new insights into the fundamental complexity of linear algebra in the Kronecker matrix-vector model. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses:
- The paper is very well organized and written.
- It is not discussed how the theoretical merit may affect practical applications that involve Kronecker matrix-vector models.

Other Comments Or Suggestions: I think there is a typo in Lemma 8 - ... universal constants $C_\tau, C_\delta > 1$ ... should it be $C_\tau, C_0$ instead? Questions For Authors: How could your result on the exponential complexity of Kronecker matrix-vector models affect its applications, such as quantum information science or medical imaging? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the positive feedback, and for catching the typo in Lemma 8 – we will correct it! * **On the practical impact:** Our exponential ($\exp(q)$) lower bounds for the generic KMVP model serve as a crucial baseline, demonstrating that applications requiring efficiency *cannot* treat the tensor as an arbitrary black-box accessed only via KMVPs ($A(\otimes x_i)$ queries). They *must* exploit known structural properties. This implies two paths forward: 1. **Avoid problematic queries:** Our results on small-alphabet algorithms (Section 5) directly advise against using such sketches (e.g., Rademacher vectors with entries in $\{\pm 1\}$) in downstream applications, as used in practice [Feldman et al., Rakhshan Rabusseau], favoring continuous distributions like Gaussian instead for tasks like zero-testing. 2. **Leverage structure:** Efficient algorithms must use more structural information. Our bounds quantify the "cost of ignorance" when restricted to KMVPs. This justifies why successful tensor methods often use algorithms tailored to specific representations (like [Ahle et al.], exploiting data storage) or leverage query types beyond standard KMVPs that are enabled by that structure (e.g., MPO-vector products for TT-matrices [Rakhshan Rabusseau]). For low-order tensors (e.g., $q=3$ or $q=4$ in medical imaging), an $\exp(q)$ dependence might be tolerable, but our results highlight the scaling limitations for higher-order tensors common in other fields like quantum information science. Thanks for the question! We will incorporate this discussion into the paper.
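As background for the query model discussed in this review thread: the algebraic fact underlying the paper's analysis, as summarized by the reviewers, is that inner products of Kronecker-structured vectors factorize across the $q$ components, so each additional factor multiplies in a number of magnitude below 1 and the inner product shrinks exponentially in $q$. A minimal numpy sketch of this identity (illustrative only, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 3, 4

# Random unit factor vectors for x = x_1 ⊗ ... ⊗ x_q and y = y_1 ⊗ ... ⊗ y_q.
xs = [v / np.linalg.norm(v) for v in rng.standard_normal((q, n))]
ys = [v / np.linalg.norm(v) for v in rng.standard_normal((q, n))]

def kron_all(vs):
    """Kronecker product of a list of vectors."""
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

x, y = kron_all(xs), kron_all(ys)

# Inner products factorize: <⊗x_i, ⊗y_i> = ∏ <x_i, y_i>.
lhs = x @ y
rhs = np.prod([a @ b for a, b in zip(xs, ys)])
assert np.isclose(lhs, rhs)
```

Since each factor $\langle x_i, y_i\rangle$ is typically a fraction of $1/\sqrt{n}$ in magnitude, the product picks up an extra factor exponential in $q$ beyond the $n^{-q/2}$ scale expected for generic vectors, which is the near-orthogonality phenomenon the lower bounds exploit.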
Summary: This paper studied the Kronecker matrix-vector product oracle complexity lower bounds for estimating the trace and spectrum of a matrix $A$. The authors showed that for a matrix $A\in R^{n^q\times n^q}$ and vectors $x = \otimes x_i$, $x_i\in R^n$ for $i\in [q]$, estimating $tr(A)$ or $\lambda_1(A)$ requires at least $C^q$ matrix-vector products between $A$ and such vectors $x$. The result relies on a novel probability bound on the inner product of two uniform Kronecker-product vectors. The result also implies that if $x_i\in\{1,-1\}^n$, it takes $2^q$ Kronecker matrix-vector product queries to $A$ to test whether $A = 0$ or not, while if the $x_i$ are Gaussian vectors it only requires 1 query. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I checked the proofs in the main body. Experimental Designs Or Analyses: Theory paper. N/A Supplementary Material: I checked App A about the proof of Lem 8. Relation To Broader Scientific Literature: The work nicely extends the thread of research on the computational complexity of tensor calculations. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I think this paper is clearly written with many useful intuitions and explanations. I am not very familiar with the literature on the oracle complexity of tensor calculations using Kronecker products. However, I think this paper makes a significant contribution to the computational (oracle) complexity of tensor calculations. The paper provides a very interesting separation in oracle complexity between methods using non-Kronecker products and Kronecker products, as well as a separation in the performance of Kronecker-product-based algorithms between large alphabets and small alphabets. The techniques in this paper might also be of independent interest for other fields. Other Comments Or Suggestions: N/A Questions For Authors: I am wondering how the sparsity of $A$ would contribute to the oracle complexity of Kronecker products?
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their appreciation of our work and the insightful question! * **On the effect of sparsity:** If $A$ is sparse, we do not particularly expect this to drastically reduce the *oracle* query complexity. Suppose we aim to show that $s$-sparsity reduces complexity. We can take our dense lower bound instance $A \in \mathbb{R}^{n^q \times n^q}$ and embed it into a larger zero-padded matrix $B \in \mathbb{R}^{m^q \times m^q}$ (where $m \approx n/s^{1/q}$) such that $B$ is $s$-sparse but information-theoretically equivalent to $A$ regarding KMVP queries involving the non-zero block. Our $\exp(q)$ lower bounds would still hold for recovering information about the original $A$ via queries to $B$. This indicates that the hardness demonstrated by our lower bounds stems from the limitations of the KMVP queries interacting with worst-case matrix constructions, rather than simply the density of the matrix. It might be possible to get sparsity-dependent bounds with careful parameterization, but simple sparsity alone doesn't bypass the worst-case lower bound in this model. Thanks for the question!
Summary: Given a matrix $A$, this paper considers estimating the top eigenvalue and the trace of $A$ by matrix-vector multiplications in which the vector is the Kronecker product of $q$ vectors. The main results of the paper are that constant factor estimation of these values require exponential in $q$ such matrix-vector multiplications. Another result states that testing whether $A=0$ requires exponential in $q$ Kronecker matrix-vector multiplications if the $q$ vectors come from the Radamacher distribution. This is in contrast with the Gaussian case. All of the results of the paper rely on the fact that a random Kronecker product vector is almost orthogonal to an arbitrary Kronecker product vector with high probability. In fact, the “level of orthogonality” is smaller than what one expects for general vectors by a factor exponential in $q$. This is not very hard to see from the algebraic structure of the Kronecker product. Claims And Evidence: Some claims regarding the importance of this result for tensor decompositions are not justified in my opinion. Methods And Evaluation Criteria: The paper is theoretical and this is not applicable. Theoretical Claims: I have not checked the proofs carefully, but the results sound natural and correct to me. Experimental Designs Or Analyses: The paper is theoretical and this is not applicable. Supplementary Material: I have not checked the supplementary material. Relation To Broader Scientific Literature: It is not clear what the exact implications of these results are to the broader literature. Prior works have established exponential bounds for embeddings which I believe are important for the sampling literature, but the current results do not seem to give any implications for that either. Essential References Not Discussed: I don't know any important references that have not been discussed. 
Other Strengths And Weaknesses: The results of the paper are not very surprising in my opinion and they are not aligned with the motivations about tensor decomposition with which the paper starts. More specifically, the paper argues that their results have strong implications for tensor decomposition since in many cases, one does not want to unravel the decomposition’s structure to test properties related to it, and therefore, it is more suitable to consider the matrix-vector multiplication model. However, the results of the paper are for general matrices and not the structured matrices arising from tensor decompositions. I don’t find the results very appealing if they are not generalizable to such structured matrices, and I don’t see how they could be generalized to such matrices. Other Comments Or Suggestions: I don't have any other comments or suggestions. Questions For Authors: My main question is whether these results are generalizable to the structured matrices that appear in tensor decompositions. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for raising this critical point about the relationship between our general matrix lower bounds and specific tensor decomposition applications. We understand the concern that our worst-case instances might not be compactly representable. However, our work's primary focus is on understanding the fundamental limitations imposed by the **Kronecker matrix-vector product (KMVP) query model itself**. We view this model, where interaction is limited to queries of the form $A(\otimes x_i)$, as a natural, restricted way to interact with large linear operators (which may or may not arise from tensors), and we aim to characterize its inherent power. Crucially, as the reviewer notes, some tensors *are* compactly representable. Our results in Section 5 for Zero-Testing provide a concrete example where the lower bound instance *is* a sparse rank-one tensor, exactly representable in standard formats (CP, Tucker, TT, etc.). For this directly relevant case, we prove an exponential ($\exp(q)$) separation between small-alphabet sketches (like Rademacher, used in practice) and large-alphabet sketches (Gaussian), showing the former fail dramatically even for this simple task. This yields the direct, practical implication that small-alphabet sketches should be avoided in tensor algorithms relying on KMVP-like queries. Regarding the bounds in Section 4 for potentially non-compact matrices: these results are vital because they establish the **baseline difficulty** of solving fundamental problems when restricted *solely* to KMVP queries. The exponential ($\exp(q)$) complexity proves that **any algorithm achieving sub-exponential performance in this model *must* implicitly or explicitly leverage structural properties of the matrix beyond what is accessible through generic KMVP queries.** If an algorithm treats the matrix purely as a black-box accessible only via KMVPs, it *will* fail on our hard instances. 
Therefore, our work rigorously demonstrates *why* successful tensor algorithms often cannot afford to be structure-oblivious; they must exploit the specific tensor representation (like TT/Tucker) either through specialized queries or direct manipulation of factors. This directly addresses the dichotomy noted in our introduction [Lines 36-41]: our lower bounds explain the inefficiency of structure-oblivious methods operating within the KMVP framework and quantify the inherent cost associated with this restricted oracle access. In summary, our results provide fundamental insights into the KMVP query model. The zero-testing bounds offer direct practical guidance, while the general lower bounds rigorously justify the necessity of structure-aware algorithm design for tensors when seeking efficient solutions. We hope this clarifies the significance of our findings, and we will refine the paper's exposition to better emphasize these connections. Thank you again for the feedback!
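One concrete sense, for readers outside the area, in which the "leverage structure" point from this rebuttal plays out: when the matrix itself is a Kronecker product, a Kronecker matrix-vector product never needs to be formed in the full $n^q$-dimensional space, thanks to the mixed-product property $(\otimes_i A_i)(\otimes_i x_i) = \otimes_i (A_i x_i)$. A hedged numpy sketch of this identity (illustrative only, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 3, 3
As = rng.standard_normal((q, n, n))  # factor matrices A_1, ..., A_q
xs = rng.standard_normal((q, n))     # factor vectors x_1, ..., x_q

def kron_all(factors):
    """Kronecker product of a list of matrices or vectors."""
    out = factors[0]
    for f in factors[1:]:
        out = np.kron(out, f)
    return out

# Naive KMVP: materialize the n^q x n^q matrix, then multiply (O(n^{2q}) work).
big = kron_all(list(As)) @ kron_all(list(xs))

# Structure-aware KMVP via the mixed-product property: q small n x n products.
small = kron_all([A @ x for A, x in zip(As, xs)])

assert np.allclose(big, small)
```

This is exactly the kind of structural shortcut that a structure-oblivious algorithm, restricted to black-box KMVP queries, cannot exploit; the lower bounds quantify the resulting cost.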
Summary: The authors study a computational model where a matrix A can only be accessed through matrix-vector products Ax where x has the specific form of the Kronecker product of q vectors. The paper establishes several key results: - The authors prove exponential lower bounds (in terms of q) on the number of queries needed to estimate properties such as the trace and top eigenvalue of a matrix, when restricted to using Kronecker matrix-vector products. These bounds apply to all adaptive algorithms under a mild conditioning assumption. - They demonstrate a fundamental gap between different types of random vectors. - The core technical insight driving these results is that random vectors with Kronecker structure have exponentially smaller inner products compared to their non-Kronecker counterparts, fundamentally limiting information extraction per query. Claims And Evidence: The claims made in the paper are supported by rigorous mathematical proofs. The authors provide formal theorems, lemmas, and detailed proofs for their main results. The conditioning assumption they make for their lower bounds (requiring that the matrix of query vectors is not ill-conditioned) is reasonable and well-justified, as they explain why most practical algorithms naturally satisfy this condition. The paper effectively connects theoretical results to practical implications, explaining previously observed phenomena in prior work through a coherent mathematical framework, which adds credibility to their claims. Methods And Evaluation Criteria: The methods used are appropriate. The authors: - Define the relevant computational model precisely (Kronecker Matrix-Vector Oracle). - Establish formal proof frameworks for lower bounds. - Use appropriate information-theoretic techniques adapted from prior work. - Derive matching upper and lower bounds for certain problems. 
Theoretical Claims: The theoretical analysis appears sound: - Lemma 8, which establishes the near-orthogonality of random Kronecker-structured vectors, forms a critical foundation for the other results. The proof appears sound. - The proofs of Theorems 6 and 7 (lower bounds for spectral norm and trace estimation) build on established information-theoretic techniques but extend them to the Kronecker setting. - The zero-testing complexity results (Theorem 18) correctly establish both upper and lower bounds for different alphabet sizes. Experimental Designs Or Analyses: This is a theoretical paper without experimental components. Supplementary Material: I skimmed through it but did not read it thoroughly. Relation To Broader Scientific Literature: The paper effectively connects to several areas of scientific literature, such as lower bounds for matrix-vector algorithms, tensor sketching, Kronecker variants of randomized linear algebra methods, etc. Essential References Not Discussed: To the best of my understanding, the authors have covered the main results in the literature. Other Strengths And Weaknesses: Strengths and weaknesses have been highlighted in other questions. Other Comments Or Suggestions: None. Questions For Authors: 1. The results establish exponential lower bounds for the worst case. Are there natural subclasses of matrices or tensors for which the Kronecker matrix-vector complexity is not exponential? 2. How do your results extend to approximate tensor representations? Do the same exponential lower bounds apply when we only seek approximate solutions? 3. How does the trade-off between query complexity and accuracy behave? Specifically, if we relax the accuracy requirements (e.g., allow larger approximation errors), can we achieve sub-exponential query complexity? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their positive comments and insightful questions! * **On subclasses with non-exponential complexity:** That's an excellent question exploring the boundary of our worst-case results. Information-theoretically, if a matrix class $\mathcal{A}$ has $D$ degrees of freedom (e.g., $D = O(kqn^2)$ if $A$ is a sum of $k$ Kronecker-structured symmetric matrices), then $\approx D$ KMVP queries should suffice to identify $A \in \mathcal{A}$. However, our work focuses on the *computational* query complexity within the restricted Kronecker matrix-vector product (KMVP) model, where queries are of the form $A(\otimes x_i)$. Our results show that *without* assuming specific structure that is *exploitable* via KMVP queries (or alternative queries enabled by the structure, like Tensor Train-vector products for TT-matrices), the complexity is exponential, specifically $\exp(q)$. Therefore, efficient algorithms *necessitate* leveraging such structural assumptions, confirming that MPO/TT or similar structured classes are indeed the candidates where sub-exponential complexity might be achievable, precisely *because* they allow breaking away from the limitations of the generic KMVP oracle. * **On approximate tensor representations:** We interpreted this as "given access to an arbitrary tensor $T$, can we show exponential lower bounds against the number of KMVPs needed to recover an approximate compact tensor representation of $T$?” Based on Lemma 45 (appendix), which shows $\exp(q)$ queries are needed to find a Kronecker vector $v$ non-trivially correlated with a planted rank-one Kronecker vector $u$ in noise ($A = W + \lambda uu'$), our answer is partially yes. This suggests that even finding low-rank tensor *structure* within noise is hard in the KMVP model. 
While we haven't rigorously proven "$\exp(q)$ queries are needed to produce a near-optimal TT approximation to $A$," Lemma 45 strongly indicates the difficulty of recovering even approximate structure efficiently via KMVPs alone. * **On the accuracy-complexity trade-off:** Our results suggest this trade-off is poor in the KMVP model. For zero-testing from a small alphabet (Section 5), algorithms with polynomial query complexity incur infinite relative error with high probability. For the not-ill-conditioned lower bounds (Section 4), Theorem 6 (spectral norm, $\lambda_1(A)$) already demonstrates exponential multiplicative error even with exponentially many queries. Theorem 7 (trace estimation, $\mathrm{tr}(A)$) likely implies a similar outcome via Lemma 37. Thus, for these fundamental problems, simply relaxing the accuracy requirement to constant multiplicative error does not appear to break the $\exp(q)$ barrier within the KMVP model, according to our analysis. Thanks again for your questions! --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their feedback on my questions. I will keep my overall recommendation.
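The mechanism behind the near-orthogonality phenomenon discussed in this thread — the inner product of Kronecker-structured vectors factorizes across modes, $\langle \otimes_i x_i, \otimes_i y_i\rangle = \prod_i \langle x_i, y_i\rangle$, so its typical magnitude shrinks geometrically in $q$ — can be checked with a small numerical sketch. This is our own illustration under assumed dimensions `n` and `q`, not code from the paper; the helper `kron_vector` is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def kron_vector(factors):
    """Form the Kronecker product x_1 (x) x_2 (x) ... (x) x_q of a list of vectors."""
    v = factors[0]
    for x in factors[1:]:
        v = np.kron(v, x)
    return v

n, q = 4, 8  # base dimension n, number of modes q (total dimension n**q = 65536)

# Two independent Kronecker-structured random unit vectors.
xs = [x / np.linalg.norm(x) for x in rng.standard_normal((q, n))]
ys = [y / np.linalg.norm(y) for y in rng.standard_normal((q, n))]
u, v = kron_vector(xs), kron_vector(ys)

# The inner product factorizes across modes:
#   <x_1 (x) ... (x) x_q, y_1 (x) ... (x) y_q> = prod_i <x_i, y_i>.
factored = float(np.prod([x @ y for x, y in zip(xs, ys)]))
print(abs(u @ v), abs(factored))
```

With unit-norm factors, each $|\langle x_i, y_i\rangle| \le 1$ by Cauchy–Schwarz, so the product can only shrink as $q$ grows; this factorization is the insight the review identifies as limiting the information extracted per query.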
Measuring Diversity: Axioms and Challenges
Accept (poster)
Summary: This paper examines in depth the problem of quantifying diversity for a set of objects, a concept widely used in various fields such as image generation, molecule generation, and recommendation systems. The authors conduct a systematic review of existing diversity measures and highlight their undesirable behaviors in certain cases. Based on this analysis, they formulate three desirable properties (axioms) for a reliable diversity measure: monotonicity, uniqueness, and continuity. The paper demonstrates that none of the existing measures simultaneously satisfy these three axioms, thus suggesting their inadequacy for a rigorous quantification of diversity. Subsequently, the authors construct two examples of measures that possess all the desirable properties, proving that the set of axioms is not self-contradictory. However, these constructed examples turn out to be computationally too complex for practical use, leading to an open problem: that of designing a diversity measure that satisfies all the axioms while being efficiently computable. # update after rebuttal No need to change my score. Claims And Evidence: This is a purely theoretical paper, which can have a definitive impact on the practice. All the claims are thoroughly either illustrated or demonstrated. The paper is well written and richly illustrated. Although the paper is theoretical and contains no experiments, I like it, and I believe it shall be published. Methods And Evaluation Criteria: There is no application, this is a purely theoretical paper. Theoretical Claims: As far as I have been able to check, the proofs are correct. Experimental Designs Or Analyses: There is no experimental evaluation per se. However, the examples illustrating the different claims are convincing. Supplementary Material: The supplementary material (the appendix) contains the proofs, that I read. 
Relation To Broader Scientific Literature: The interest of the paper is to provide insights into measures of diversity that are frequently used in practice. As none of these measures satisfies the desired properties, it is important for the end-user to understand why and when they are failing, so that the measures can be used properly. Furthermore, the question of finding a usable measure that satisfies the three desirable properties, which is an open problem in the paper, is of great interest. Essential References Not Discussed: None Other Strengths And Weaknesses: -- Other Comments Or Suggestions: -- Questions For Authors: -- Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and support! We sincerely appreciate that you acknowledge theoretical soundness of our research and its relevance to practice.
Summary: This paper studies the problem of diversity measurement and proposes three axioms—monotonicity, uniqueness, and continuity—as necessary conditions for a reliable diversity measure. The authors analyze existing diversity measures and demonstrate that none satisfy all three axioms. To address this gap, the paper constructs two theoretical diversity measures that adhere to the proposed axioms but are computationally infeasible. The work highlights an open challenge: finding a computationally efficient diversity measure that aligns with the theoretical principles. Claims And Evidence: The paper presents the following key claims: 1. The three proposed axioms (monotonicity, uniqueness, and continuity) establish a theoretical basis for evaluating diversity measures. 2. The paper systematically reviews existing diversity measures and demonstrates that they fail to satisfy all three axioms. 3. Two new diversity measures (MultiDimVolume and IntegralMaxClique) are proposed that satisfy the axioms but have NP-hard complexity. The claims are well-supported by rigorous theoretical analysis, but the lack of experimental validation weakens their practical impact. Methods And Evaluation Criteria: The axiomatic approach provides a novel and well-structured evaluation framework. The systematic review of existing methods is insightful and identifies key limitations. However, the absence of benchmark datasets and experimental results limits the validation of the proposed axioms in real-world applications. The evaluation criteria focus solely on theoretical correctness, neglecting computational efficiency and practical applicability. Theoretical Claims: The theoretical claims are well-structured and logically sound. The proofs are detailed and rigorously support the proposed axioms. However, no evidence is provided to show whether satisfying these axioms leads to improved diversity measurement in practical scenarios.
Experimental Designs Or Analyses: No experiments are provided to validate the proposed axioms. The paper does not test existing diversity measures on real-world datasets. The computational feasibility of the new diversity measures is not explored through empirical benchmarks. Supplementary Material: The appendix provides extensive proofs but lacks practical implementation details. Relation To Broader Scientific Literature: The paper builds upon existing work on diversity measurement but does not discuss its implications for real-world applications such as active learning, data augmentation, or clustering. The authors should discuss potential applications of the proposed framework in real-world machine learning tasks. Essential References Not Discussed: The paper does not reference works on diversity in active learning, dataset selection, or representation learning. Other Strengths And Weaknesses: Strengths: 1. The paper introduces a novel axiomatic framework for diversity measurement, which provides a theoretical foundation for evaluating and comparing different diversity measures. 2. The paper provides well-structured proofs to demonstrate that existing diversity measures fail to satisfy all three proposed axioms. The authors also construct two diversity measures that adhere to these axioms, proving their feasibility in a theoretical context. Weaknesses: 1. While the paper proposes diversity measures that satisfy the axioms, they are computationally infeasible (NP-hard), limiting their applicability in real-world scenarios. The paper does not explore approximations or alternative methods that balance theoretical soundness with practical efficiency. 2. The paper does not provide experimental results to demonstrate the impact of the proposed axioms on real-world datasets. The theoretical framework is well-developed, but its practical implications remain unclear without empirical evaluation.
3. The paper does not sufficiently explore how these axioms might be relevant to practical machine learning tasks, such as dataset selection, diversity-aware training, or clustering evaluation. 4. The proposed framework assumes that all pairwise distances are equally important, neglecting the possibility that hierarchical structures or different levels of diversity may be relevant in real-world applications. Considering weighted distances or context-aware diversity measures could improve the framework’s applicability. Other Comments Or Suggestions: Please see the above weaknesses. Questions For Authors: Please see the above weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your suggestions and positive feedback! We appreciate that you find our approach novel, well-structured and theoretically sound. We address the raised concerns below. > The paper does not explore approximations or alternative methods that balance theoretical soundness with practical efficiency. In the paper, we briefly discuss that some measures can still be suitable in some applications despite not satisfying all the properties. E.g., in Section 3, we discuss undesirable behavior w.r.t. optimization and note that not all measures have this problem. In lines 204-211, we mention that Bottleneck, SumBottleneck and Energy behave well w.r.t. optimization and thus can be used as target measures when constructing a diverse dataset. Energy has all the desirable properties when all elements are unique. When some elements coincide, this measure becomes infinite and thus the properties are not satisfied, which makes Energy inappropriate for comparing diversity of different datasets. On the other hand, while we do not advise optimizing Average, its value is interpretable and can be used as one of the measures evaluating dataset diversity. > The paper does not test existing diversity measures on real-world datasets. To demonstrate that the choice of diversity measure is important and to illustrate shortcomings of a particular measure, we conducted the following experiment. We consider the setup of generating structurally diverse graphs from Velikonivtsev et al. (2024). Using the code publicly shared by the authors, we compare diverse graphs generated by a genetic algorithm while optimizing either Energy (as in the original paper) or Average. The obtained results can be found [here](https://anonymous.4open.science/r/ICML_2025_rebuttal-3966). 
We see that optimizing Average (portrait_genetic_optimizing_avg.pdf) leads to more similar graph structures that tend to be either too dense or too sparse, which agrees with our observations about corner cases for Average in Section 2. In turn, Energy is suitable for optimization and thus leads to more structurally diverse graphs. > No experiments are provided to validate the proposed axioms. Validating axioms is challenging: one can potentially compare two measures, one of which satisfies a particular axiom while the second one does not, but the result will not allow one to make conclusions about the axiom since there can be other properties affecting the comparison of the measures. That is why we first analyzed different measures and observed their failure cases and then formulated properties that are intuitively desirable for all diversity measures. If one knows which properties a particular measure does or does not satisfy, one can better interpret the obtained results. > Author should discuss potential applications of the proposed framework in real-world machine learning tasks. Thank you for your suggestion. We will include a deeper discussion on how diversity measurement is used in machine learning tasks. Specifically, we will highlight that a well-defined diversity measure ensures informative sample selection in active learning, guides the generation of meaningful synthetic data in augmentation, and provides a way to assess cluster distinctiveness. > The appendix provides extensive proofs but lacks practical implementation details. Note that the main contribution of our paper is theoretical and the paper does not contain experiments. However, if there are any questions regarding potential implementations or applications, we will be happy to address them. > The evaluation criteria focus solely on theoretical correctness, neglecting computational efficiency and practical applicability. 
Let us remark that in the paper we pay attention to the computational efficiency of diversity measures: asymptotic computational complexity is analyzed and reported in Table 2. Both MultiDimVolume and IntegralMaxClique are NP-hard which is why we pose an important open problem on whether there exists a more efficient measure that satisfies all the properties. > The computational feasibility of the new diversity measures is not explored through empirical benchmarks. Let us note that the main goal of constructing these two measures was to show that our set of axioms is not self-contradictory. Since these measures are NP-hard, they cannot be used in most applications. That is why we believe that the open problem that we pose is important and hope that it will be addressed in future studies. > Considering weighted distances or context-aware diversity measures could improve the framework’s applicability. Thank you for this suggestion! We believe that studying more complex scenarios is an important direction for future research. In our paper, we address the simplest case and observe that it is already challenging. We are open to further discussions!
Summary: This paper explores how to quantify diversity for a set of objects. The authors first review existing diversity measures, showing that they can have undesirable behaviors. To address this, the paper suggests three properties that a diversity measure should have: monotonicity (diversity should increase as pairwise distances between objects increase), uniqueness (replacing an object with a duplicate should decrease diversity), and continuity (diversity should be a continuous function of pairwise distances). The paper shows that none of the existing measures satisfies all three properties. The authors provide examples of measures that do satisfy the properties, but these are too complex for practical use. The paper concludes by posing the open problem of finding a measure that has all three properties and is computationally feasible. Claims And Evidence: The authors support their claim that existing diversity measures exhibit undesirable behavior by providing examples and visual representations in Section 3. In Section 4, the authors clearly define the axioms and provide justifications for why these axioms are important for a reliable diversity measure. The proofs for the properties of existing measures and the proposed measures are detailed and can be found in the appendices. One potential issue I see is with Axiom 3 (Continuity): a diversity function must be continuous. The authors claim that the function should naturally be continuous, and provide a counterexample in the Appendix. In this example, I do not agree with the point that `the right configuration is intuitively more diverse`. The diversity in this case depends on the distance measure. For example, even with continuous functions, if you have a very high penalty on duplicates (and/or near duplicates), you will still get such behavior, so the problem is not with continuity. Methods And Evaluation Criteria: N/A Theoretical Claims: The claims look good to me except the one with continuity mentioned above.
Experimental Designs Or Analyses: N/A Supplementary Material: I reviewed the appendix. Relation To Broader Scientific Literature: The paper makes significant contributions to the scientific literature by analyzing existing diversity measures, proposing a new set of axioms, identifying a gap in the literature, and proposing new measures that adhere to the axioms. The authors also clearly position their work within the context of prior research and highlight the direction for future work. Essential References Not Discussed: There has been a trend in methods that use human feedback/judgment to learn a high-dimensional distance measure [1] or abstract diversity metrics [2]. Given that the authors touched human intuition on diversity, a discussion on these related work would be helpful to understand how the axioms and challenges apply to more practical settings such as GenAI. [1] Fu, Stephanie, et al. "DreamSim: Learning New Dimensions of Human Visual Similarity using Synthetic Data." Advances in Neural Information Processing Systems 36 (2023): 50742-50768. [2] Ding, Li, et al. "Quality diversity through human feedback: towards open-ended diversity-driven optimization." Proceedings of the 41st International Conference on Machine Learning. 2024. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: See Claims And Evidence. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and positive feedback! We see that the main concern is regarding the necessity of the continuity axiom, so let us elaborate on this subject. **Continuity axiom** First, it is natural to assume that minor changes in object locations should lead to small changes in diversity: this is important for interpretable comparison of diversity values across datasets. Second, we believe that this property is critical since without assuming it one may come up with a range of measures that do satisfy the remaining two properties while being intuitively not useful for measuring diversity. The reasoning below extends the example in Appendix A. Consider any measure $M$ that is monotone (e.g., Average). Apply any order-preserving transformation to $M$ such that the range of the resulting measure $M’$ is in $[0,1)$. E.g., if $M$ takes only non-negative values, we can take $M’(X):=1-e^{-M(X)}$. Then, the measure $Unique(X) + M’(X)$ has both uniqueness and monotonicity. Essentially, $Unique(X) + M’(X)$ compares two configurations of points in the following way: - Count the number of unique points in both configurations; - If the numbers of unique points are different, then the configuration with a bigger number is more diverse; - If the numbers of unique points are the same, compare configurations based on the measure $M$ (or, equivalently, $M’$). We argue that $Unique(X) + M’(X)$ is (in many cases) not a good diversity measure. For this, consider any measure $M$ that has the monotonicity property (for instance Average), optimize it, and after that spread the points a bit to make them unique. Then, we get the (nearly) optimal configuration for $Unique(X) + M’(X)$, which is very similar to the optimal configuration for $M(X)$. However, the optimal solution for $M(X)$ may not be diverse, as we show with the example for Average on page 3. The discreteness of $Unique(X)$ plays a crucial role in the construction above.
A natural way to prevent the measures of the form $Unique(X) + M’(X)$ from being considered a good diversity measure is to require that diversity measures must be continuous. We will add the above reasoning to Appendix A. Regarding the visual example in Appendix A, we note that the penalty of a duplicate is higher than any increase in diversity that can be gained from moving already unique points further away from each other. We claim that all continuous monotone diversity measures do not show such behavior. Let us formally prove this. The measure from Appendix A rates any configuration of $16$ unique points as more diverse than any configuration of $15$ unique points and $1$ duplicate. Suppose, for contradiction, that a continuous monotone measure $M$ does the same. For $a \ge 0$ denote by $X_a$ the set of $16$ points with pairwise distances $a.$ For $a > 0$ denote by $Y_a$ the set of $16$ points of which $15$ have pairwise distances $a$ and one point is a duplicate. Let $r = M(Y_2)-M(Y_1)$, note that $r>0$ by monotonicity of $M$. By continuity of $M$ we can find $1>\epsilon >0$ such that $(M(X_\epsilon) - M(X_0)) < r/10$ and $(M(Y_\epsilon) - M(X_0)) < r/10$. Then $M(X_\epsilon)$ and $M(Y_\epsilon)$ differ by at most $r/5.$ At the same time, $M(Y_1) > M(Y_\epsilon)$ (since $\epsilon<1$) and $M(Y_2)-M(Y_1) =r,$ thus $M(Y_2)$ is bigger than $M(X_\epsilon)$ by at least $4r/5>0$. This is a contradiction since we assumed that $M(X_a)>M(Y_b)$ for all $a>0, b>0.$
A good distance measure is a critical ingredient of a reliable diversity measure. In our work, we assume that pairwise distances are given and analyze different ways of aggregating these values into diversity. We hope that our response addresses your concerns. We are open to further discussions!
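The discontinuity argument in this thread can be made concrete with a short sketch. This is our own hypothetical instantiation of the $Unique(X) + M'(X)$ construction, using $M = $ Average pairwise distance and the order-preserving transform $M'(X) = 1 - e^{-M(X)}$ (so the range lies in $[0,1)$); the function names and the example point sets are ours:

```python
import numpy as np

def average(X):
    """Average pairwise Euclidean distance of a point set (rows of X)."""
    n = len(X)
    dists = [np.linalg.norm(X[i] - X[j]) for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

def unique(X):
    """Number of distinct points in X."""
    return len({tuple(p) for p in X})

def diversity(X):
    # Unique(X) + M'(X) with M = Average and M'(X) = 1 - exp(-M(X)) in [0, 1):
    # monotone and uniqueness-satisfying, but it jumps at duplicates.
    return unique(X) + (1.0 - np.exp(-average(X)))

square = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

dup = square.copy()
dup[3] = dup[2]               # one exact duplicate
near = square.copy()
near[3] = near[2] + 1e-9      # an epsilon-near duplicate of the same point

print(diversity(square), diversity(near), diversity(dup))
```

Moving one point by $10^{-9}$ leaves the score essentially unchanged, while collapsing it exactly onto another point drops the score by roughly $1$: the measure is discontinuous at duplicates, which is precisely the behavior the continuity axiom rules out.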
Summary: This paper discusses metrics for measuring diversity in various applications. The authors review existing diversity measures and highlight their limitations in corner cases. They propose three key properties—monotonicity, uniqueness, and continuity—that a reliable diversity measure should possess. The paper demonstrates that no existing measure satisfies all three properties, making them unsuitable for accurate diversity quantification. While the authors construct two examples of measures that meet these properties, they, too, happen to be NP-hard, and the paper concludes by posing the challenge of creating a feasible diversity measure. ## update after rebuttal I've added follow-up comments for the authors, and I sincerely hope they'll resolve them in the next version of the manuscript. I have also raised my score by a point. Claims And Evidence: Given the nature of this paper, the provided claims and evidence are clear. However, I think there are a few weaknesses: 1. The paper is quite similar to Velikonivtsev et al. 2024, which also talks about the first two axioms and many of the mentioned previous works. Continuity being the new axiom, I am not fully convinced by the example (in Appendix A) that it is important; in particular, the entire argument rests on an intuitive notion of diversity. For instance, I might consider the figure on the right in Appendix A less diverse because of even a single duplicate (recall that you're changing two things at the same time: spread and adding a duplicate). Hence there is no reason to believe that this intuitive notion of diversity will result in practical improvement for whatever downstream task is considered. 2. Can the authors cite and discuss the following work: "Position: Measure Dataset Diversity, Don't Just Claim It". ICML'24 3. Axioms could've been explained formally using math.
That is, this work should have defined the exact notation for the diversity function, what its arguments are (a subspace of all matrices defined over the field R, a multiset of examples), and how monotonicity is mathematically defined for this multivariable function. 4. This paper misses connecting to the vast literature on submodular functions, which are known to capture diversity and appear in many areas of science. See "Submodularity In Machine Learning and Artificial Intelligence" for a gentle introduction. 5. In most practical situations one is interested in coming up with a subset of the data starting from no datapoints. This is something I am not sure how the current framework tackles (a discussion is missing). Methods And Evaluation Criteria: 1. From the point of view of picking out corner cases for existing measures, 2D examples are great. However, they are still toy examples at the core. Often, people are interested in finding a diverse subset of a given giant dataset; hence, do these corner cases (particularly for the determinant-based examples) occur regularly? 2. Unfortunately, the proposed method is also NP-hard. While the work says that it is quite difficult to satisfy all three axioms, it would've been nice to discuss an existing metric that doesn't succumb to corner cases in practical situations. Theoretical Claims: I've gone through the proofs provided in the appendix. Experimental Designs Or Analyses: While 2D experiments are good for demonstration, the proposed method is not practical (hard even to compute, let alone optimize); therefore, any empirical demonstration is not possible. Supplementary Material: N/A Relation To Broader Scientific Literature: The work lacks a discussion of the entire area of submodular functions, which are at the core of measuring data diversity. Essential References Not Discussed: 1. "Position: Measure Dataset Diversity, Don't Just Claim It". ICML'24 2.
"Submodularity In Machine Learning and Artificial Intelligence" Other Strengths And Weaknesses: Refer to the claims and evidence Other Comments Or Suggestions: Refer to the claims and evidence Questions For Authors: Refer to the claims and evidence Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the feedback and suggestions! We address the concerns below and will be happy to discuss any of the raised issues further. > Continuity being the new axiom, I am not fully convinced with the example that it is important (mentioned in Appendix A) Regarding the importance of the continuity axiom, we first note that it is natural to assume that minor changes in object locations should lead to small changes in diversity: this is important for interpretable comparison of diversity values across datasets. Also, please see our response to **Reviewer Pyqh (Continuity axiom).** Also, we modify the definitions of both Monotonicity and Uniqueness. This modification is important (see lines 244-250 of Section 4.2) and makes developing a suitable diversity measure more challenging. > Can authors cite and discuss about the following work - "Position: Measure Dataset Diversity, Don't Just Claim It". ICML'24 Thank you for the reference. This paper gives motivation on why measuring diversity is important and argues that it is critical to provide a clear definition of diversity when analyzing datasets. We will cite this paper in the introduction. > Axioms could've been explained formally using math. In the paper, we opted for more intuitive definitions and will add formal versions in the revised text. As defined in the paper, let $D_n$ be a subset of all $n \times n$ matrices satisfying the properties in lines 247-251. There is a natural action of the symmetric group $S_n$ on this subset (which simultaneously rearranges rows and columns). Diversity function is any $S_n$-invariant function from $D_n$ to $\mathbb{R}$. **Axiom 1 (Monotonicity).** For any two matrices $A, B \in D_n$ such that $\forall i,j: a_{ij} \ge b_{ij}$ and at least one of inequalities is strict, we have $\mathrm{Diversity}(A) > \mathrm{Diversity}(B)$. 
**Axiom 2 (Uniqueness).** Suppose that for $A,B \in D_n$:

- $a_{ij}=b_{ij}$ for all $i>2, j>2$
- $b_{1i}=b_{2i}$ for all $i$
- $a_{2j}=b_{2j}$ for all $j>2$
- $a_{1j}>0$ for all $j>1$

Then $\mathrm{Diversity}(A) > \mathrm{Diversity}(B)$.

**Axiom 3 (Continuity).** The space $D_n$ is endowed with the subspace topology induced by the standard topology on the set of all $n \times n$ matrices. A diversity function must be continuous w.r.t. this topology.

> This paper misses on connecting the works with the vast literature on submodular functions, which are known to capture diversity, and appear in many areas of science.

Thank you for the reference, we will add a discussion on submodular functions in the updated version. Submodular functions can indeed be used to capture diversity: by definition, they satisfy a diminishing-returns condition that captures the intuitive behavior of diversity/coverage as elements are added to a dataset (note that in our work we consider properties that hold when the number of elements is fixed). Usually, such functions are used as a diversity measure of a set of objects $X$ when the set of all possible objects $V$ is known. Thus, some known submodular functions use summation (or integration) over the set $V$ (for instance, see the facility location function). Our setting is more general: we want to measure the diversity of $X$ without any information about the bigger space $V$; in particular, we do not assume that we can sum over $V$. This restriction is reasonable in many cases: if $X$ is a set of several graphs or images and $V$ is the set of all graphs or all images, then it is infeasible to sum over $V$. We are not aware of submodular functions that can be applied to our setup while not being equivalent to one of the measures in Table 1. If there is a particular function that you believe should be added to our analysis — we are happy to extend our work.
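To make the kind of corner case debated in this thread concrete, here is a minimal numeric sketch of ours — not code from the paper — showing that average pairwise distance (the "Average" measure) can prefer a set with duplicated extreme elements over an evenly spread one:

```python
import itertools

def average_diversity(points):
    """Average pairwise absolute distance of a 1-D point set."""
    pairs = list(itertools.combinations(points, 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

# Four points on [0, 1]: evenly spread vs. duplicated extremes.
spread = [0.0, 1 / 3, 2 / 3, 1.0]
duplicated = [0.0, 0.0, 1.0, 1.0]

# Average assigns the higher score to the set containing duplicates
# (4/6 vs. 10/18) -- the duplicate-rewarding corner case.
assert average_diversity(duplicated) > average_diversity(spread)
```

This matches the intuition stated later in the thread that Average "may tend to create duplicates, especially of some 'extreme' elements".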
> In most of the practical situations one is interested in coming up with a subset of the data starting from no datapoints.

Thank you for the comment! Although our properties assume that the number of elements in a dataset is fixed, they can still be applied within iterative strategies when one adds elements successively to optimize diversity. See, e.g., the greedy algorithm in Velikonivtsev et al. (2024). Other algorithms may operate with a dataset of fixed size: we may start with a random set of objects and then use, e.g., a genetic approach (or other optimization approaches) aimed at optimizing diversity.

> do these corner cases (particularly for the determinant-based examples) occur regularly?

> While the work says that it is quite difficult to satisfy all three axioms, it would've been nice to discuss an existing metric that doesn't succumb to corner cases in practical situations.

Please see the first two comments in our reply to **Reviewer zmT9**. Here we provide an additional experiment and also discuss when existing measures can be used despite not having all the properties. We hope that our response addresses the raised issues.

---

Rebuttal Comment 1.1: Comment:

# Edit for the Authors after their comment

The latest (and biggest class) DSPNs (Bhatt, Das, and Bilmes'24) should be a better citation/discussion (this, by the way, doesn't mean you shouldn't cite the other mentioned papers, they're equally important). This class also includes how it can represent a facility location function (in Bilmes and Bai'17). I'd argue that if this can't be put in the "distance"-based diversity framework proposed in the paper, then it means it deserves to be discussed as a limitation of this framework. Moreover, it should also be addressed as a potential method that can be a way around the NP-Hard measures.
One may be able to find a modular function (or a set of modular functions) and a concave function (or a set of concave functions) that can be made to fit well with the proposed axioms (continuity can be easily satisfied in my opinion here, however).

## Earlier Rebuttal Response

I thank the authors for adding the formal definitions. While one can always state things informally, formal definitions are very important.

## Follow-up Questions

- I feel that in some cases diversity is very task-dependent. For instance, consider points on a hypersphere; should the diversity increase if some factor changes the radius of the hypersphere? I don't think that should always be the case (say if classification only depends on radial angle)
- From your reply to zmT9 -- "We see that optimizing Average (portrait_genetic_optimizing_avg.pdf) leads to more similar graph structures that tend to be either too dense or too sparse, which agrees with our observations about corner cases for Average in Section 2" -- can you please point me to how exactly this corner case is present in the average one, or in other words, which graph is the corner case that the energy metric is not succumbing to? In general, I feel one can draw the same graphs in many different ways that might look very different (illusion of the eyes).
- The mentioned greedy procedure to find a set with a high diversity value in general may not have any theoretical guarantee. Submodular functions, on the other hand, do admit a guarantee on maximization under well-behaved constraints (matroid rank, say)

## On Submodular Functions

- Submodular functions always satisfy the diminishing returns property, by definition.
- There exists a fairly large class of submodular functions, instantiated using features, that do not need a ground set of items to be instantiated.
For example, Deep Submodular Functions (Dolhansky and Bilmes'16, Bilmes and Bai'17) and its superclass Deep Submodular Peripteral Networks (Bhatt, Das and Bilmes'24), which is conjectured to represent all monotone non-decreasing normalized submodular functions.
- A discussion on all of the above would be good to have.

## References

- Dolhansky and Bilmes'16: Deep Submodular Functions: Definitions and Learning (NeurIPS'16)
- Bilmes and Bai'17: Deep Submodular Functions (https://arxiv.org/abs/1701.08939)
- Bhatt, Das and Bilmes'24: Deep Submodular Peripteral Networks (NeurIPS'24)

---

Reply to Comment 1.1.1: Comment: Thank you for your involvement in the discussion! We reply to the additional questions below.

**Q: I feel that in some cases diversity is very task-dependent. For instance, consider points on a hypersphere; should the diversity increase if some factor changes the radius of the hypersphere? I don't think that should always be the case (say if classification only depends on radial angle)**

Indeed, the notion of diversity can be task-dependent. In the example with a hypersphere, it is natural to choose the distance function accordingly. For instance, if classification only depends on the radial angle, one can choose the angular distance. Then, diversity would not change when we change the radius of the hypersphere. On the other hand, if we choose the Euclidean distance, then varying the hypersphere radius would change the diversity. The choice of a proper distance measure is very important. However, in this study, we assume that a distance measure suitable for a particular problem is already chosen. This allows us to keep our study task-agnostic and focus on properties that are desirable for general diversity measures.
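The point about distance choice can be illustrated with a small sketch (ours, purely illustrative): for points on a circle, scaling the radius changes average Euclidean diversity but leaves angular diversity unchanged.

```python
import math

def euclidean(p, q):
    return math.dist(p, q)

def angular(p, q):
    """Angle between p and q as seen from the origin."""
    dot = sum(a * b for a, b in zip(p, q))
    norm_p, norm_q = math.hypot(*p), math.hypot(*q)
    return math.acos(max(-1.0, min(1.0, dot / (norm_p * norm_q))))

def avg_pairwise(points, dist):
    pairs = [(p, q) for i, p in enumerate(points) for q in points[i + 1:]]
    return sum(dist(p, q) for p, q in pairs) / len(pairs)

def circle_points(radius):
    """Three fixed directions on a circle of the given radius."""
    return [(radius * math.cos(t), radius * math.sin(t)) for t in (0.0, 2.1, 4.2)]

# Scaling the radius changes Euclidean diversity but not angular diversity.
assert abs(avg_pairwise(circle_points(1.0), angular)
           - avg_pairwise(circle_points(5.0), angular)) < 1e-9
assert avg_pairwise(circle_points(5.0), euclidean) > avg_pairwise(circle_points(1.0), euclidean)
```

Here the diversity measure (average pairwise distance) is held fixed while only the distance function varies, matching the rebuttal's framing that the distance is chosen per task.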
**Q: From your reply to zmT9 … can you please point me to how exactly this corner case is present in the average one, or in other words, which graph is the corner case that energy metric is not succumbing to?**

We expected from our intuition and the synthetic example that Average may tend to create duplicates, especially of some “extreme” elements. In the example with generated graphs, we see that there are 9 complete graphs (all node degrees equal 15). Note that the order of graphs corresponds to decreasing density, so complete graphs are the first 9 graphs on the figure. Also, there are 7 isomorphic sparse star-shaped graphs (one central node is connected to all other nodes). Overall, for Average, among 100 generated graphs there are only 25 non-isomorphic ones. In contrast, all graphs produced by Energy are non-isomorphic, among which there is one complete graph and one empty graph.

**Q: The mentioned greedy procedure to find a set with a high diversity value in general may not have any theoretical guarantee. Submodular functions, on the other hand, do admit a guarantee on maximization under well-behaved constraints (matroid rank, say).**

We agree that when we limit the desirable properties to a fixed number of elements, there are no guarantees for greedy methods. Thus, other procedures can be preferred, like genetic approaches or local optimization methods. We think that it would be great to extend the list of axioms to varying dataset sizes. This can be done by adding more axioms to the current list. However, we noticed that even for the fixed size of the set, it is already extremely challenging to satisfy the desirable properties, thus we leave extending the list of axioms to future studies.

**Q: There exists a fairly large class of submodular functions, instantiated using features that do not need a ground set of items to be instantiated.
For example, Deep Submodular Functions (Dolhansky and Bilmes'16, Bilmes and Bai'17) and its superclass Deep Submodular Peripteral Networks (Bhatt, Das and Bilmes'24), which is conjectured to represent all monotone non-decreasing normalized submodular functions**

Thank you for the references! We plan to include a discussion of submodular functions and their relation to our work in the revised version of the paper. Let us briefly discuss the referenced works. For this, let us cite Dolhansky et al. (2016):

> Feature-based functions take the form $f(X) = \sum_{u \in U} w_u \phi_u\left(m_u(X)\right)$, where $\phi_u$ is a non-decreasing, non-negative, univariate, normalized concave function, $m_u(X)$ is a feature-specific non-negative modular function, and $w_u$ is a non-negative feature weight. The result is the class of feature-based submodular functions (instances of SCMs).

(here $m_u: V \to \mathbb{R}$ is a non-negative modular function and $m_u(X):= \sum_{x \in X} m_u(x)$)

> Another advantage of such functions is that they do not require the construction of a pairwise graph and therefore do not have quadratic cost as would, say a facility location function ... or any function based on pair-wise distances, all of which have cost $\mathcal{O}(n^2)$ to evaluate.

As far as we understand, feature-based submodular functions cannot be directly applied if we want to express the diversity of a set as a function of pairwise distances. Thus, we cannot analyze them in our framework. The more complex deep submodular functions are constructed in a layered manner, similar to neural networks, whose zeroth layer consists of several feature-based submodular functions. Thus, deep submodular functions also do not operate with pairwise distances and cannot be analyzed within our framework. We would be happy to engage in further discussions to properly address the suggestions. Unfortunately, we are not able to post more comments; we can only edit this one.
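The quoted feature-based form can be sketched in a few lines. This is our illustrative toy example — the features, weights, and choice of `phi` are invented, not taken from the cited papers:

```python
import math

def feature_based_f(X, features, weights, phi=math.sqrt):
    """f(X) = sum_u w_u * phi(m_u(X)), with phi concave, non-decreasing,
    phi(0) = 0, and m_u(X) = sum_{x in X} m_u(x) a non-negative modular function."""
    return sum(w * phi(sum(m[x] for x in X)) for w, m in zip(weights, features))

# Toy features over the ground set {0, 1, 2} (values invented for illustration).
features = [{0: 1.0, 1: 2.0, 2: 0.5}, {0: 0.5, 1: 0.0, 2: 3.0}]
weights = [1.0, 2.0]

# Diminishing returns: the marginal gain of adding element 2
# shrinks as the base set grows from {0} to {0, 1}.
gain_small = feature_based_f({0, 2}, features, weights) - feature_based_f({0}, features, weights)
gain_large = feature_based_f({0, 1, 2}, features, weights) - feature_based_f({0, 1}, features, weights)
assert gain_small >= gain_large
```

Note that, as the rebuttal observes, nothing in this construction takes pairwise distances as input, which is why it falls outside the paper's distance-matrix framework.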
Since Faithfulness Fails: The Performance Limits of Neural Causal Discovery
Accept (poster)
Summary: The paper benchmarks several representative neural causal discovery methods in a coherent and charitable way, revealing consistent shortcomings. These are attributed to faithfulness violations, even in large sample sizes and small graphs, suggesting a more fundamental flaw in the neural causal discovery paradigm.

Claims And Evidence: I generally found the claims clearly stated and well-supported, with two important exceptions:

1. it's claimed that neural methods can't detect absence/existence of causal relationships, but it's not clear to me that randomly initialized NNs are guaranteed to have 'strong enough' influence; in linear simulations, it's common to sample weights $\mathrm{abs}(w) > 0.5$, but it's not clear if anything similar was done here
2. NNs are universal function approximators, so it's unclear to me how a different approximator is going to solve the problem; rather, it seems to me different objective functions are needed---and the paper doesn't give evidence that it's specifically a NN problem, disentangled from the similar objective functions the neural approaches are using.

And some less important exceptions:

3. In Section 5: "These parameter choices align with commonly studied medium-sized graphs in causal discovery research". I wouldn't say 5- and 10-node graphs are medium-sized, and expected degrees of 1 and 2 are quite sparse.

Methods And Evaluation Criteria: Yes, these all seem reasonable.

Theoretical Claims: N/a

Experimental Designs Or Analyses: Yes, these all seem reasonable, other than related to points 1 and 3 in _Claims and evidence_ above.

Supplementary Material: Yes, I looked through it all.

Relation To Broader Scientific Literature: The findings here more systematically and rigorously support and relate previous findings concerning $\lambda$-faithfulness and poor performance of neural causal discovery methods.
The paper doesn't really make claims about non-neural causal discovery, but it would be interesting to see at least some standard/state-of-the-art non-neural methods included for comparison, such as kernel PC or GRaSP.

Essential References Not Discussed: Nothing missing that I'm aware of.

Other Strengths And Weaknesses: Already covered.

Other Comments Or Suggestions:

- second paragraph in intro: should be "ground-truth" instead of "ground-though"
- end of paragraph after (5): should be "bridge the gap" instead of "breach..."
- in Section 3: should be "Synthetic" not "Synthetics"
- missing label in caption of Figure 1(b), so it appears as ??
- __conclusion__ before Section 4: should be "that" instead of "hat"
- Section 4.2: (twice) should be "faithful" instead of "faithfull"
- Line 747: "TODO cite pearl?."

Questions For Authors:

1. Have the authors tried using linear or NN simulations with stronger relations, like mentioned in __Claims and evidence__ above?
2. Likewise, do the authors have an argument against neural approaches specifically or for any other particular approaches, independent of the objective functions used?
3. Have the authors tried comparing non-neural methods? Do they fare any better?
4. In __Conclusion__ before Section 4, what does "the number of equivalent graphs will reach 0" mean?

Satisfactory answers to the first 3 questions would improve my overall recommendation.

Code Of Conduct: Affirmed. Overall Recommendation: 4
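The "stronger relations" convention mentioned in point 1 and Question 1 could be sketched as follows. This is a hypothetical simulation setup of ours, not the paper's code: edge weights are sampled with magnitude bounded away from zero, then data is drawn from a linear-Gaussian SEM by ancestral sampling.

```python
import random

def sample_weight(low=0.5, high=2.0, rng=random):
    """Edge weight with |w| in [low, high] and a random sign, keeping
    weights bounded away from zero ('strong enough' influence)."""
    w = rng.uniform(low, high)
    return w if rng.random() < 0.5 else -w

def simulate_linear_sem(order, parents, weights, n, rng=random):
    """Ancestral sampling of a linear-Gaussian SEM; `order` is topological,
    `parents[v]` lists v's parents, `weights[(p, v)]` is the edge weight."""
    data = {v: [] for v in order}
    for _ in range(n):
        sample = {}
        for v in order:
            sample[v] = sum(weights[(p, v)] * sample[p] for p in parents[v])
            sample[v] += rng.gauss(0.0, 1.0)  # standard Gaussian noise
        for v in order:
            data[v].append(sample[v])
    return data

# Toy chain X -> Y with a weight bounded away from zero.
weights = {("X", "Y"): sample_weight()}
data = simulate_linear_sem(["X", "Y"], {"X": [], "Y": ["X"]}, weights, n=1000)
assert abs(weights[("X", "Y")]) >= 0.5
```

As the rebuttal below notes (citing Uhler et al., 2013), bounding individual edge weights this way does not by itself prevent near-cancellation of paths, which is the source of λ-faithfulness violations.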
Rebuttal 1: Rebuttal: We sincerely appreciate the Reviewer's positive feedback and thoughtful assessment of our work. We are glad that our benchmarking of representative neural causal discovery methods was recognized as both coherent and charitable, providing a clear and systematic evaluation of their limitations. We are also pleased that our claims were found to be well-supported. Moreover, we are grateful for the acknowledgment that our findings rigorously build upon and relate to prior work. To clarify our claims (addressing feedback from *Claims And Evidence* section): 1. We claim that neural networks cannot reliably detect the absence or existence of causal relationships given practically available data volumes. The strength of causal relationships plays a crucial role; we quantify it using the notion of λ-faithfulness. We show that the required sample size will be infeasible in practical scenarios. We thank the Reviewer for suggesting the additional experiment with artificially ‘strong influence’. For the linear case, such an analysis was performed in [Uhler2013] (see Figure 5 in their paper). They show that increasing the strength does not change the overall picture. This might seem counterintuitive at first. However, increasing the direct links does not eliminate the exponential vanishing of λ-faithful distributions due to the cancellation of paths. We highlight this information in the revised version of the paper. Moreover, we run an analogous simple experiment for the non-linear case. Namely, during data generation, neural network weights were initialized with values spaced from zero by a factor c∈{0.0,0.5,1.0}. Importantly (and akin to the linear case), this did not result in observable differences in the distribution of the λ-property, see Figure R.2 in the [rebuttal material](https://drive.google.com/file/d/1m7rFfvf2_xoQorCprk6RE1MiITmYivhy/view?usp=sharing) 2. The reviewer is completely right. 
Using different approximators would unlikely solve the problem, and indeed, novel score functions or an alternative causal discovery objective are needed. We are sorry for the confusion. We have improved the description so that it is now explicitly stated. During the rebuttal, we provide another piece of evidence by using kernel-based PC (i.e., a non-NN approximator), observing that the slow convergence phenomenon is present. We include this result in Figure R.3 in the [rebuttal material](https://drive.google.com/file/d/1m7rFfvf2_xoQorCprk6RE1MiITmYivhy/view?usp=sharing) 3. We understand the reviewer's concerns regarding the size of the evaluation graphs. We will revise the description to refer to “small and medium-sized graphs”. (We also note that in Section 5, we use graphs with 30 nodes.) As for the questions: 1. Please see our response to Claims and Evidence 1. 2. Please see our response to Claims and Evidence 2. 3. During the rebuttal, we compared kernel-based PC to our algorithm introduced in Section 3. As expected, we observed comparable performance. 4. The referenced sentence pertains to the experiment in Section 3.1. Our intended meaning is that the number of structures outside the MEC class that receive statistically equivalent scores decreases slowly as the number of samples increases. We will clarify this in the revised version of the paper. Again, we thank the reviewer for the constructive criticism. The new version of the manuscript includes the textual improvements announced above and several other small amendments. Moreover, it includes the new experimental results. We’d be happy to address any additional questions or concerns. [Uhler2013] Uhler, Caroline, et al. "Geometry of the faithfulness assumption in causal inference." The Annals of Statistics (2013): 436-463 --- Rebuttal Comment 1.1: Comment: The rebuttal addresses most of my concerns, so I increase my overall recommendation from 3 to 4. 
As a final comment, I suggest the authors try to phrase some of the claims of the paper more carefully, paying special attention to whether each claim/evidence concerns neural networks, (penalized likelihood-based) score functions, or the intersection of the two. For example, do the conclusions drawn from Figure 1(a) hold across all values of $\gamma$? And how does it compare to using the MLE in the linear gaussian (or other parametric) setting? I wonder if some of the claims in the paper should actually be about penalized likelihood-based scores (which many NN methods use), rather than NNs themselves. Another interesting comparison in this vein would be to see if differentiable ANM methods (which use a different score, but still use NNs) also succumb to $\lambda$-unfaithful samples. Overall, I think the paper is very thought-provoking and contributes a valuable, critical perspective among the growing number of neuro-causal methods. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the positive feedback, the increased score, and the kind comments regarding the significance and impact of our work. We also appreciate the suggestion to clarify the claims, which we will carefully consider as we prepare the next version of the paper. We thank the reviewer for suggesting extending the analysis. Exploring other MLE-based methods and recent differentiable ANM approaches is indeed an interesting and important direction for future work, and we expect that our results will continue to hold in these settings.
Summary: This paper claims that while neural causal discovery methods have become more scalable, they fundamentally struggle with accuracy when identifying true causal relationships. Neural networks are unable to reliably distinguish between real and non-existent causal connections in finite samples, and violations of the faithfulness property—which occur frequently in practice—significantly undermine their performance. The authors conclude that these limitations require a paradigm shift in the field of neural causal discovery.

Claims And Evidence: I feel that I don't understand section 3. It seems like section 3 shows that as sample size increases, the neural network does better and better at identifying the structure. Then the conclusion claims that methods can't identify structure consistently. The prose directly above that conclusion states that larger samples enable identification of structures in more difficult datasets. How can I reconcile these two things? The metrics of lambda and lambda-hat make sense to me, and it's cool that these measures correlate with the number of samples needed for convergence.

Methods And Evaluation Criteria: I am not familiar with the state-of-the-art methods in neural causal discovery, so I can't say whether the methods they chose are an appropriate spread of the state-of-the-art models.

Theoretical Claims: N/A

Experimental Designs Or Analyses: The experimental design of using neural networks to generate a dataset and then training neural networks to perform causal discovery on that dataset seems reasonable to me if the goal is to problematize their ability to do causal discovery.

Supplementary Material: I looked at the neural network details, which should really be provided in the main text as that is the core of the whole paper!
Relation To Broader Scientific Literature: I think understanding how to leverage neural networks for causal discovery is broadly of huge importance, and showing their limitations can be very helpful in that process. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I feel like I might be missing the main point of this paper. Section 5 has prose that references section 3 as if section 3 showed that neural network models don't scale with the number of examples provided in training, but that is not what section 3 says at all. It says that neural models get better the more data that is provided. Could you explain how Figure 1b and Figure 3b are saying the same thing? From my perspective, they are showing very different results. Separately, if the whole point of this paper is arguing for a paradigm shift in causal discovery away from neural networks, how can this whole paper only use artificial datasets? I don't think a field should progress by people constructing difficult datasets that show methods fail, shouldn't we be grounded out in some sort of real world dataset or phenomena? "It may hold, though is highly unprobable, that real-world distributions adhere to λ-strong faithfulness despite large sizes of the graph. Further investigation is required" This quote from the end of section 6 is really important to me. I think making a technical point about NCD methods and where they struggle is totally a reasonable and good thing to do. However, if you want to argue that the field needs a paradigm shift, I feel you must ground out this claim in an evaluation of the real world data the field someday wants to model. Without an argument that the assumptions you made in the paper will hold in real-world setting, I feel I have to reject this paper based on how ambitious and far-reaching the introduction, title, and abstract are. Other Comments Or Suggestions: 245 "hat the number" -> "that the number" Figure 1 caption has Section ?? 
line 290 Faithfull Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the Reviewer's thoughtful comments and acknowledgment of our work's importance in understanding neural causal discovery limitations. We admit shortcomings in the presentation, including those pointed out by the Reviewer. We have made a substantial effort to improve the quality. Below, we clarify the specific concerns. To be on the safe side, we start with a short general summary.

Causal graph discovery has traditionally been framed as a discrete combinatorial problem. More recently, neural network-based continuous optimization methods have been introduced, offering scalability and computational efficiency. Our study reveals the key limitations of such approaches: likelihood-based methods that utilize neural networks struggle to distinguish between true and spurious causal connections when using realistically available data volumes. We show that the problem is quickly exacerbated as the graph size grows. From this, we draw our key takeaway: the causal discovery community needs to search for alternative objective functions. With this in mind, we now address the Reviewer's specific concerns.

**Regarding results from sec. 3.** The Reviewer points out a discrepancy between the improvement reported in Section 3 and our conclusion about the fundamental limitations. The latter stems from the observation, which is key to the whole paper, that even using substantial amounts of data (e.g., 8,000 samples), our idealized method fails to identify even small causal structures (5 nodes). Subsequently, we have confirmed that the problem persists even with 80,000 samples (see Figure R.3 in the [rebuttal material](https://drive.google.com/file/d/1m7rFfvf2_xoQorCprk6RE1MiITmYivhy/view?usp=sharing)). Moreover, the results shown in Sec. 4 discuss how distributions associated with larger graphs quickly become highly complex, leading to a sharp increase in data requirements.
Together, these results show that the structure identification of large graphs requires unrealistically large datasets. On the conceptual level, the difficulty of discovery (i.e., the number of required samples) is highly correlated with $\lambda^{-1}$ from the $\lambda$-faithfulness notion. In Section 4, we show that $\lambda$ is typically small and decreases with the size of the graphs. We will provide an improved conclusion paragraph that clearly states the above in the next revision of the paper.

**Regarding sections 3 and 5 showing the same thing.** Both graphs convey the same message: that causal discovery is not achievable using a practically available data regime. Sec 3 & 4 (Fig. 1b) show that large graphs require impractically large datasets (as discussed above), even when using idealized methods. Sec 5 (Fig. 3b) reinforces these findings: specifically, when using a practical method, we observe slow (or lack of) convergence. We hope this clarifies the issue and will revise Sec. 5 accordingly. Please let us know if further questions arise.

**Regarding the usage of synthetic datasets.** We acknowledge concerns about our reliance on synthetic datasets, a necessity due to the absence of large, real-world datasets annotated with ground-truth causal graphs. However, we follow standard causal discovery practices, using widely accepted synthetic benchmarks that are randomly generated rather than adversarially designed, as in prior works. Thus, we believe the results are generalizable to real-world datasets. In our rebuttal, we strengthen our results by providing lambda statistics for additional graph structures (scale-free, small-world, and bipartite), ensuring broader coverage of realistic scenarios; see Figure R.1 in the [rebuttal material](https://drive.google.com/file/d/1m7rFfvf2_xoQorCprk6RE1MiITmYivhy/view?usp=sharing).
Given these points, we believe our methodology is well-justified, but we are open to discussing further refinements if the Reviewer has specific suggestions. *** We sincerely thank the Reviewer for their careful reading and for identifying the typographical errors. We will correct these and incorporate the requested neural network details in the final version. We hope that this rebuttal clarifies the Reviewer's concerns and highlights the significance of our findings. We believe our work provides valuable insights into the limitations of current neural causal discovery methods and motivates the need for alternative objective functions. In light of these clarifications, we respectfully ask the Reviewer to reconsider their recommendation. While we have endeavored to address all questions thoroughly, the character limit required us to keep our responses concise. We would be happy to provide further explanations if needed. --- Rebuttal Comment 1.1: Comment: I won't be changing my score, as I don't really understand how 8k or 80k are impractically large numbers when it comes to datasets. I also don't really understand how you can claim results are generalizable to real world datasets. That being said, I recognize I may be missing some of the main points, and if the other reviewers and area chair agree this paper should be accepted, I'm perfectly happy with that! --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's feedback and positive outlook on our work. We make an effort to address Reviewers' concerns around the claims of the paper. **Regarding data size** Collecting data from real-world causal systems is usually a costly, time-consuming process, for example, including wet lab experiments for the use cases in biology or chemistry. Thus, the community standard is to focus on datasets that might feel small compared to the standard of other ML fields, even in the cases when synthetic data are used. 
For reference, we present a summary of datasets used in causal discovery for context, using N for nodes and S for samples.

[DCDI]:
- Sachs: N=11, S~=6000
- synthetic: N = {10, 20}, S=10K

[BayesDag]:
- synthetic: N=5, S=500; N={30, 50, 70, 100}, S=5K
- Syntren (semi-synthetic, simulation): N=20, S=500
- Sachs: S=800 (only observational), N=11

[GraN-Dag]:
- Appendix A.4, titled “large sample size experiment”: N=50, S<=10K
- synthetic: N={10, 20, 50, 100}, S=1K

[CAM]:
- Real data: N=39, S=118
- synthetic: N={10, 100, 1000}, S=200

[SDCD]:
- synthetic: N={20, 30, 40}, S=10K

[DiscoGEN]:
- synthetic: N=100, S<=50K

[AVICI]:
- training: N=(2,50), S=200
- evaluation: N=(2,50), S={30, 100, 300, 1000}

[FIP]:
- training: N=100, S=200
- evaluation: N=100, S<=10K

When it comes to real-world data, we would like to refer to Table 2 from [Grounding] describing the biological data:

| Dataset | Description | Number of interventions | Number of samples | Number of nodes |
|---------|-------------|------|-------|------|
| Wille et al. (2004) | Gene expression microarray (*A. thaliana*) | 1 | 118 | 39 |
| Dixit et al. (2016) | Perturb-seq (bone marrow-derived dendritic cells) | 8 | 14427 | 24 |
| Replogle et al. (2022) | Perturb-seq (cell line K562) | 1158 | 310385 | 8552 |
| Replogle et al. (2022) | Perturb-seq (cell line RPE1) | 651 | 247914 | 8833 |
| Frangieh et al. (2021) | Perturb-CITE-seq (melanoma cells) | 249 | 218331 | 1000 |
| Sachs et al. (2005) | Flow cytometry (CD4+ T cells) | 6 | 5846 | 11 |

Even though the largest dataset contains ~310k samples, we emphasize that the ratio of dataset size to graph size is significantly worse in their case compared to ours (we use 80k samples with a graph size of 5, whereas their graph size is 8k). Moreover, the difficulty of the causal discovery problem increases rapidly with graph size, at least when measured using the proxy of lambda, as indicated in Section 4.
Given these factors, we strongly believe our results support the conclusion that 'causal discovery is impossible in the listed cases under the current paradigm’. At the same time, we have partial results for a dataset of 800k samples. Although these results lack statistical power, they are fully consistent with our claims. We commit to expanding this part in the camera-ready version. **Regarding real-world datasets** We acknowledge that real-world datasets often exhibit complexities that synthetic datasets may not fully capture, such as measurement noise, latent confounding, or domain-specific constraints. In our study, we focus on a fundamental property of distributions induced by causal graphs—specifically, the cancellation of paths phenomenon (see [StrongFaith]). This property has been previously analyzed in linear systems, and we extend this evaluation to nonlinear (NN-based) functions. While real-world datasets may differ from synthetic ones in various ways, this phenomenon is not tied to a specific functional class, noise type, or error but rather emerges from the structural properties of the graph itself. Therefore, we argue that it is likely to be relevant in many real-world settings. That said, we acknowledge that empirical validation on real-world datasets is essential for assessing the practical impact of these findings. **References** [DCDI] BROUILLARD, Philippe, et al. Differentiable causal discovery from interventional data. NeurIPS 2020. [BayesDag]ANNADANI, Yashas, et al. Bayesdag: Gradient-based posterior inference for causal discovery. NeurIPS 2023. [GraN-Dag] LACHAPELLE, Sébastien, et al. Gradient-based neural dag learning. arXiv preprint, 2019. [CAM] BÜHLMANN, Peter; PETERS, Jonas; ERNEST, Jan. CAM: Causal additive models, high-dimensional order search and penalized regression. 2014. [SDCD] NAZARET, Achille, et al. Stable differentiable causal discovery. ICML 2024. [DiscoGEN] KE, N. R., et al. 
DiscoGen: Learning to Discover Gene Regulatory Networks. 2023.
[AVICI] LORCH, Lars, et al. Amortized inference for causal structure learning. NeurIPS 2022.
[FIP] SCETBON, Meyer, et al. A Fixed-Point Approach for Causal Generative Modeling. ICML 2024.
[Grounding] BROUILLARD, Philippe, et al. The Landscape of Causal Discovery Data: Grounding Causal Discovery in Real-World Applications. arXiv preprint, 2024.
[StrongFaith] ZHANG, Jiji; SPIRTES, Peter. Strong faithfulness and uniform consistency in causal inference. UAI 2002.
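As an aside, the path-cancellation phenomenon referenced above (see [StrongFaith]) can be illustrated with a minimal linear-SCM sketch. The graph, coefficient values, and sample size below are illustrative choices for exposition only, not the configuration used in our experiments:

```python
import numpy as np

# Minimal linear SCM illustrating path cancellation (unfaithfulness).
# Graph: X -> Y -> Z plus a direct edge X -> Z whose weight is chosen to
# cancel the indirect path, so corr(X, Z) ~= 0 although X causes Z.
rng = np.random.default_rng(0)
n = 200_000
a, b = 0.8, 0.5   # weights for X -> Y and Y -> Z
c = -a * b        # direct X -> Z weight cancels the X -> Y -> Z path

X = rng.normal(size=n)
Y = a * X + rng.normal(size=n)
Z = b * Y + c * X + rng.normal(size=n)

marginal = np.corrcoef(X, Z)[0, 1]            # ~0 by construction
residual = Z - b * Y                          # condition on Y by regressing it out
conditional = np.corrcoef(X, residual)[0, 1]  # clearly nonzero

print(f"corr(X, Z) = {marginal:.4f}, corr(X, Z | Y) = {conditional:.4f}")
```

A method that tests marginal independence on such data would wrongly drop the X -> Z edge; near-cancellations of this kind are exactly what lambda-strong faithfulness quantifies.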
Summary: This work critically examines the limitations of neural causal discovery methods, revealing their fundamental inability to distinguish causal relationships in finite-sample regimes reliably. Through a systematic benchmarking protocol, the authors demonstrate that even state-of-the-art neural approaches struggle to recover ground-truth causal structures, attributing this failure to estimation errors and the violation of the faithfulness assumption. The study quantifies the difficulty of causal discovery using the λ-strong faithfulness property and shows that as graph size increases, the proportion of faithful distributions decreases exponentially, fundamentally constraining current methods. Claims And Evidence: The paper provides strong empirical evidence to support its claims, using rigorous, controlled experiments. The claim that neural causal discovery methods struggle to recover ground-truth structures is well-supported by systematic evaluations across multiple datasets and methods. Additionally, the argument that the faithfulness assumption is a key bottleneck is backed by quantitative analysis of λ-strong faithfulness. Methods And Evaluation Criteria: The paper introduces a systematic benchmarking framework that standardizes datasets, hyperparameter tuning, and functional approximations, ensuring robust and fair comparisons across methods. The use of synthetic datasets with known ground-truth causal structures is an appropriate choice for evaluating structural recovery accuracy, while the incorporation of the λ-strong faithfulness property provides a theoretically grounded measure of dataset difficulty. This approach effectively demonstrates the limitations of existing neural methods. However, the inclusion of real-world datasets would further strengthen the study by assessing whether these challenges persist in practical applications. Theoretical Claims: This work primarily focuses on empirical contributions rather than formal theoretical proofs.
Experimental Designs Or Analyses: - As mentioned previously, this work focuses only on synthetic data, while recommending, in the discussion section, that others use real-world datasets. - The graphs analyzed in this work were generated only using the Erdos-Renyi model. It would be insightful to know whether the results generalize to other graph classes such as scale-free, small-world, etc. - Additionally, the authors could perform ablation studies to better understand how different neural network architectures (depth, width, activation functions) affect causal graph recovery. Supplementary Material: None Relation To Broader Scientific Literature: While neural methods aim to improve scalability, the study empirically shows they struggle under finite samples due to faithfulness violations. By quantifying the impact of λ-strong faithfulness, the paper highlights fundamental constraints and the need for alternative causal discovery paradigms. Essential References Not Discussed: None Other Strengths And Weaknesses: The paper is generally well-written, aside from the typos listed below. Other Comments Or Suggestions: Please correct the following typos: - Section 1 (line 48): unfaithfull -> unfaithful - Section 2, the line “An SCM defines a joint distibution P over the set of random vairables {Xi}” is repeated twice. - Multiple occurrences of “distibution” instead of “distribution.” - Figure 1, last sentence has an incorrect section reference. - Titles of Section 3 and Section 3.1 are the same. - Section 3.2 (line 245): hat -> that - Section 4 (line 258): ot -> to Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s thoughtful feedback and their positive evaluation of our work. We are especially grateful for the recognition of the strong empirical evidence supporting our claims and rigorous controlled experiments that validate our approach. It is encouraging to hear that our systematic evaluations effectively support our findings and that our methodology successfully highlights the limitations of existing neural methods. Additionally, we appreciate the acknowledgment that the use of synthetic datasets with known ground-truth causal structures is an appropriate choice. Our goal was to ensure a clear and interpretable evaluation, and we are pleased that the reviewer found this aspect of our work well justified. Finally, we are glad that the reviewer found the paper to be generally well-written, and we thank them for their time and effort in assessing our work. Regarding suggestions from *Experimental Designs Or Analyses*: 1. We agree that evaluations on real-world graphs are vital for monitoring progress in causal discovery. However, there is a very limited amount of real-world data annotated with ground-truth graphs. Thus, the only viable way to conduct a comprehensive, large-scale analysis is to resort to synthetic data. Moreover, we follow the standard practice in causal discovery research by using widely accepted synthetic benchmarks. These datasets are not designed adversarially but are generated randomly. Thus, we believe the results are generalizable to real-world datasets. 2. Further, during the rebuttal phase, we conducted additional analyses of the lambda-faithfulness property for other types of graphs (scale-free, small-world, and bipartite). The results align with the observations on ER graphs, see Figure R.1 in the [rebuttal material](https://drive.google.com/file/d/1m7rFfvf2_xoQorCprk6RE1MiITmYivhy/view?usp=sharing). These results will be added to the final version of the paper. 
We hope the reviewer finds them interesting and reassuring. 3. During the project, we conducted multiple ablations regarding the neural network architectures. Some of them are described in the appendix. Table 9 in Appendix A analyzes the impact of the network size on the performance of our optimized algorithm introduced in Section 3. In Appendix B, we provide a detailed study of the influence of network architecture on the performance of selected neural causal discovery methods. Notably, we compared architectures with residual connections and with layer norm; see Figure 9 in Appendix B. In all the cases, we found the behavior very similar to the one presented in the main body of the paper. We add a remark stating this. We hope that the above answers resolve the reviewer’s concern. However, we’d be happy to perform additional analysis and add clarifications should the reviewer find that something is still missing.
GuardAgent: Safeguard LLM Agents via Knowledge-Enabled Reasoning
Accept (poster)
Summary: This paper proposes GuardAgent, a new framework designed to safeguard LLM agents by checking if their actions satisfy specific safety guard requests. GuardAgent has two main steps: 1) it analyzes safety guard requests and generates a task plan, and 2) it converts this plan into guardrail code and executes it. The authors further develop two benchmarks for evaluating GuardAgent: EICU-AC (designed for healthcare agents) and Mind2Web-SC (designed for web agents). Experiments show that GuardAgent achieves 98% guardrail accuracy on EICU-AC and 83%-90% on Mind2Web-SC across several LLMs (GPT-4, Llama3-70B, and Llama3.1-70B). Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: This work focuses more on experimental design and benchmarking; there is not much theory. But I carefully read all of Section 4, "GuardAgent Framework". Experimental Designs Or Analyses: Experimental designs are reasonable. The authors provide benchmarks, datasets, and metrics. Supplementary Material: I read all the appendices provided by the authors in the PDF. Relation To Broader Scientific Literature: This paper makes a reasonable contribution to the field of LLM-based agent safety. Although this paper only discusses a "healthcare agent" and a "web agent", other agents may be able to draw on the experimental design ideas and evaluation criteria in this work. Essential References Not Discussed: [1] TrustAgent: Towards Safe and Trustworthy LLM-based Agents through Agent Constitution. ICML 2024 Workshop TiFA. Other Strengths And Weaknesses: Strengths: 1. Comprehensive system design: GuardAgent involves several necessary LLM-based agent components (planning, code generation, memory). The code-based approach provides more reliable guardrails compared to natural language-based alternatives. 2. Comprehensive experiment results: The authors developed two benchmarks (healthcare and web agent) and evaluated their approach across multiple backbone LLMs, demonstrating consistent performance.
The paper also includes thorough comparisons with baseline approaches, including "model guarding agents" (using LLMs with carefully crafted prompts) and "invasive" guardrails (hardcoded into agent systems). 3. Good writing and easy to follow. Weakness: 1. The application scope only focuses on healthcare applications and web navigation. In related work, the author mentioned several LLM-based agent applications, such as finance, autonomous driving, and education. The generalizability of GuardAgent needs to be further explored. 2. GuardAgent’s performance improvement relies on in-context learning and memory components. The paper doesn't explore how the quality of these demonstrations affects performance, though it does show that performance degrades with less relevant demonstrations. 3. While the author mentions a debugging mechanism, the details and effectiveness of this component aren't thoroughly explored. Could the author provide more evidence or discussion of the "debugging" component? Other Comments Or Suggestions: N/A Questions For Authors: See in weakness. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and your positive feedback on our contributions. Please find our detailed responses to your comments below. **W1: Generalizability of GuardAgent needs to be further explored** **A1**: Thank you for the comment. In Appendix P, we have included an application of GuardAgent beyond healthcare and web navigation: common sense reasoning, where GuardAgent outperforms the baseline in predicting the risks of task execution. Here, we add another experiment to demonstrate GuardAgent’s efficacy in safeguarding scientific reasoning tasks. We use the MMLU dataset, which contains questions from 57 different subjects. We divide these subjects into four major categories: Mathematics and Logic, Natural Science, Social Science, and Technology and Engineering. The target agent is designed to answer questions from these categories, with the requirement that the user must possess expertise in the relevant category. Note that this setting simulates access control in Q&A. GuardAgent receives a question along with the user’s expertise. If the question aligns with the user’s area of expertise, the target agent is allowed to respond. GuardAgent achieves a 100% success rate in Label Prediction Accuracy (LPA, measuring the overall correctness of the safeguard) with GPT-4. We will include this experiment in the revised paper. Thank you again for your suggestion. **W2: How the quality of the demonstrations affects performance** **A2**: Thank you for the comment. As you mentioned, the performance of GuardAgent is not significantly degraded with fewer demonstrations. Regarding the quality of the demonstrations, we have included an experiment in Appendix M, where we retrieve demonstrations based on "least-similarity" instead of "max-similarity". As shown in Table 5 of our paper, the safeguard accuracy of GuardAgent drops from 98.7% to 98.1% on EICU-AC and from 90.0% to 84.0% on Mind2Web-SC. 
The results indicate that reducing the relevance of the retrieved memories only causes moderate degradation in the performance of GuardAgent. Following your suggestion, we present another experiment to further explore the impact of demonstration quality on GuardAgent’s performance. In the default setting, only correct safeguard examples are added to the memory base. Here, we modify the setting by storing all executions of GuardAgent indiscriminately. In the table below, we observe that there is a less than 9% drop in absolute safeguard accuracy across all configurations compared with the results in our Table 1. These results highlight the decent robustness of GuardAgent to the quality of the demonstrations.

||EICU-AC|||||Mind2Web-SC|||||
|-|-|-|-|-|-|-|-|-|-|-|
||LPA|LPP|LPR|EA|FRA|LPA|LPP|LPR|EA|FRA|
|llama3|93.0|100|86.4|86.4|100|81.5|98.5|65.0|64.0|100|
|llama3.1|91.5|100|86.8|79.9|100|83.0|92.3|72.0|72.0|100|
|gpt-4|90.19|90.2|91.3|87.6|89.6|81.5|89.0|73.0|72.0|100|

Please note that in practice, an evaluator (e.g., human feedback) can be employed to determine which agent executions should be added to the memory. Such evaluation ensures the quality of future retrieved demonstrations. Thank you again for the suggestion!

**W3: More evidence or discussion of the "debugging" component**

**A3**: Thank you for the question. The debugging component prompts the core LLM with the guardrail task, the generated guardrail code, the errors raised from code execution, and a request to regenerate the guardrail code. We have added a detailed discussion of the debugging mechanism in Appendix K. In most cases, debugging is not activated; therefore, we created a more challenging scenario by removing both the toolbox and memory from GuardAgent. Consequently, 29 out of 316 generated codes were not executable initially, including 11 name errors, 3 syntax errors, and 15 type errors. Debugging resolved 9 of these 29 errors, specifically 8 name errors and 1 type error.
None of the syntax errors were successfully debugged; they were all caused by the incorrect representation of the newline symbol as '\\\n'. We will include a complete debugging prompt and the corresponding correction of the guardrail code in our revised paper. Thank you for your suggestion. **W4: Missing reference** **A4**: Thank you for bringing this important work to our attention. TrustAgent develops and implements an agent constitution to ensure the safety of actions and tool utilization of the target agent. This work is highly relevant to ours, and we will definitely include a discussion about it in our revised paper. Thank you once again for your valuable feedback. We are happy to answer your follow-up questions if there are any. --- Rebuttal Comment 1.1: Comment: Thanks to the author's rebuttal. It addresses part of my concerns. I have raised my score. --- Reply to Comment 1.1.1: Comment: Thank you for your acknowledgement of our efforts and for raising the score. And thank you again for reviewing our paper.
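The debugging mechanism described in A3 above can be sketched as a simple execute-and-regenerate loop. This is a hedged sketch: `regenerate` stands in for the core-LLM re-prompt (task, failing code, and error text), and the convention that the guardrail code sets a `label` variable is an illustrative assumption, not the exact interface from the paper:

```python
# Sketch of a debugging loop: run the generated guardrail code and, on
# failure, feed the error back so the core LLM can regenerate the code.
def run_with_debugging(code: str, regenerate, max_attempts: int = 3):
    for _ in range(max_attempts):
        try:
            namespace = {}
            exec(code, namespace)          # execute the guardrail code
            return namespace.get("label")  # assume it sets a `label` variable
        except Exception as err:
            # Re-prompt with the failing code and the raised error.
            code = regenerate(code, repr(err))
    return None  # could not repair the code within the attempt budget

# Toy regenerator that "fixes" an undefined-name error.
broken = "label = allowd"                  # raises NameError when executed
result = run_with_debugging(broken, lambda c, e: "label = 'allow'")
print(result)  # -> allow
```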
Summary: This paper proposes a framework that safeguards LLM agents by using an agent (GuardAgent) to check whether their actions comply with safety requirements. GuardAgent uses a two-step process: generating a task plan based on safety requirements, then converting this plan into executable guardrail code. The authors introduce two benchmarks (EICU-AC for healthcare access control and Mind2Web-SC for web safety) and demonstrate effectiveness with 98% and 83% guardrail accuracies, respectively. Claims And Evidence: The claims about guardrail accuracy and flexibility are supported by experimental results across multiple LLM backbones. However, the "low operational overhead" claim lacks comprehensive analysis against alternatives. Methods And Evaluation Criteria: The evaluation metrics (LPA, LPP, LPR, EA, FRA) appropriately capture both guardrail effectiveness and impact on target agent functionality. However, the benchmarks are specifically designed for narrow domains, which may not necessarily represent a broad spectrum of real-life situations. Theoretical Claims: The paper doesn't make formal theoretical claims requiring mathematical proofs. The claims are purely empirical. Experimental Designs Or Analyses: Some notable strengths: - The comparison against multiple baselines, including both "model guarding agents" and invasive approaches directly modifying the target agent's prompt. - Comprehensive ablation studies examining the impact of memory size and toolbox components. - Detailed breakdown of performance by role in EICU-AC and rule in Mind2Web-SC to identify potential weaknesses. - Testing with multiple LLM backbones to demonstrate robustness across different models. Limitations: - It does not provide analyses on false refusals where benign actions have been wrongly denied. Supplementary Material: Yes - Detailed information about the construction of the EICU-AC and Mind2Web-SC benchmarks. - Complete specifications of inputs and outputs to GuardAgent.
- Examples of code generated by GuardAgent and case studies demonstrating its effectiveness. Relation To Broader Scientific Literature: - They distinguish between "model guarding models" approaches (like LlamaGuard) and their "agent guarding agents" approach, clearly explaining why traditional guardrails designed for textual content are insufficient for agent actions. - They acknowledge and build upon existing work on knowledge-enabled reasoning in LLM agents, particularly the use of in-context demonstrations and retrieval mechanisms. - They contextualize their work within the evolving landscape of safety concerns for AI agents, citing relevant literature on potential misuse scenarios. Essential References Not Discussed: - More discussion of competing frameworks for agent guardrails like Langchain's guardrails library would provide better context on the landscape of agent safety approaches. - References to formal verification approaches for LLM outputs, which could complement the code-based guardrails proposed in GuardAgent. Other Strengths And Weaknesses: This paper is looking at an important problem, which is a plus. Other Comments Or Suggestions: N/A Questions For Authors: - The guardagent framework assumes that safety specifications are available, however exhaustive specifications are not always feasible. How would a guardagent perform in that scenario? - Do you observe false refusals and how often? - Is code generation and execution always feasible in a general setting? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and your positive feedback. Please find our responses to your comments below: **W1: The “low operational overhead” claim** **A1**: We apologize for the ambiguity. The “low operational overhead” refers to one of the three key advantages of GuardAgent – it “employs the core LLM by in-context learning, enabling direct utilization of off-the-shelf LLMs without the need for additional training” (line 106-108 left). Therefore, compared with trained guardrails like LlamaGuard, our method has low operational overhead. We will change the statement by directly saying that GuardAgent is free of training. Regarding the empirical time cost, we have included Table 6 in the appendix. Both GuardAgent and the “model-guarding-agent” baseline are free of training – they both achieve the same level of time cost as the target agent. Thank you for pointing this out. **W2: The benchmarks are specifically designed for narrow domains** **A2**: Thank you for the comment. The healthcare domain (and code generation) covered by EICU-AC and the web applications covered by Mind2Web-SC are two important fields for LLM agent applications. In Appendix P, we also curated a dataset from CSQA to evaluate GuardAgent on common knowledge QA with certain rules. In our future work, we will create a complex dataset combining the ones we have proposed in the paper and more data for diverse application domains to better cover a broader spectrum of real-life situations. **W3&Q2: Analyses on false refusals** **A3**: Thank you for your constructive suggestion. In the table below, we show the false refusal rate of GuardAgent compared with the model-guarding-agent baseline. GuardAgent achieves close to zero false refusal rates in all configurations except on Mind2Web-SC with Llama3.1-70B. 
||EICU-AC|||Mind2Web-SC|||
|-|-|-|-|-|-|-|
||Llama3-70B|Llama3.1-70B|GPT-4|Llama3-70B|Llama3.1-70B|GPT-4|
|Baseline|4.5|2.6|5.2|4.0|3.0|0|
|GuardAgent|0|0|0|1.0|14.0|0|

We examined the false refusal cases for this configuration and found that most false refusals were caused by incorrect associations between the agent's actions and the safety rules. For example, when the target agent is instructed to "Search for running shoes," GuardAgent incorrectly applies the rule for "Search/Find/Show movies/music/video," while the correct rule is for "Shopping." This error likely resulted from the exact word "Search" matching in both the user query and the incorrect rule. We will include these results in our revised paper. Thank you for your suggestion!

**W4: More discussion of 1) competing frameworks for agent guardrails and 2) formal verification approaches**

**A4**: Thank you for pointing out these important related directions. The agent guardrails, such as the guardrail libraries for LangChain and LlamaIndex, belong to the model-guarding-agent family, where the guardrail models are specifically trained to detect particular categories of harmfulness. Formal verification is another important type of approach to ensure the safety of LLM outputs. For example, C-RAG provides theoretical guarantees on the generation risk of LLMs with RAG [1]; FVEL provides formal verification for LLM-generated code by transforming it into a verifiable environment [2]. We will add a more detailed discussion with extended references in our revised paper. Thank you again for the advice.

[1] Kang et al. C-RAG: Certified Generation Risks for Retrieval-Augmented Language Models, ICML 2024.
[2] Lin et al. FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving, NeurIPS 2024.

**Q1: What if safety specifications are unavailable?**

**A5**: Thank you for the insightful question. GuardAgent is designed to follow user-desired safety rules.
When there are no user-provided rules, GuardAgent could incorporate external knowledge by calling a search API to establish a “constitution”. For example, when the target agent is designed for drug development, FDA regulations will be recommended with high probability. We will mention these potential extensions to GuardAgent in the discussion of future work.

**Q3: Is code generation and execution always feasible in a general setting?**

**A6**: Thank you for the insightful question. Based on our evaluation in Appendix P, GuardAgent can handle most text-based tasks using code generation and execution, which covers a substantial portion of the task space of LLMs and agents. One possible explanation is that most rules can be abstracted into decision trees, where the inputs are the action trajectories of the target agent and the output indicates whether a rule has been violated or followed. This structure makes decision-making through code generation and execution feasible in many cases. We will add the discussion above to support the design of code generation and execution. Thank you for the question. Thank you for your valuable comments. Please let us know if you have any follow-up questions.
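To make the decision-tree view in A6 concrete, here is a hedged sketch of a rule-as-code guardrail: an access-control rule compiled into an executable check. The role names, table names, and the (label, explanation) return convention are all illustrative stand-ins, not the exact code GuardAgent generates:

```python
# Illustrative access-control guardrail: deny if the agent's action touches
# tables outside the role's permitted set (roles and tables are made up).
ALLOWED = {
    "physician": {"diagnosis", "medication", "treatment"},
    "nursing": {"vitalperiodic", "nursecharting"},
}

def guardrail(role, accessed_tables):
    """Return a (label, explanation) pair for a proposed agent action."""
    forbidden = set(accessed_tables) - ALLOWED.get(role, set())
    if forbidden:
        return "deny", f"role '{role}' may not access: {sorted(forbidden)}"
    return "allow", "all accessed tables are permitted for this role"

print(guardrail("nursing", {"vitalperiodic", "medication"}))
```

Each branch of such a check corresponds to a leaf of the decision tree mentioned above, which is why rules of this kind translate naturally into executable code.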
Summary: This paper proposes GuardAgent, a novel framework to safeguard LLM agents by leveraging knowledge-enabled reasoning. The approach involves a two-step process where an LLM generates a detailed task plan from safety guard requests and then produces executable guardrail code via in-context learning with retrieved demonstrations. The authors introduce two benchmarks—EICU-AC for healthcare access control and Mind2Web-SC for web agent safety control—to evaluate the method. Experimental results indicate that GuardAgent significantly outperforms baseline methods (including models with well-crafted prompts and a model-guarding baseline) on multiple metrics (label prediction accuracy, explanation accuracy, etc.) without impairing the underlying agent’s task performance. Claims And Evidence: The paper claims that GuardAgent provides flexible and reliable guardrails for diverse LLM agents, improving upon baseline approaches by converting safety requests into executable code. While the experimental evidence supports high accuracy on the provided benchmarks, several claims remain partially unsubstantiated: - It is unclear how well the generated guardrails will hold up against dynamically evolving attack surfaces in real-world settings. - The experiments do not assess the consistency of guardrail outputs across multiple runs, which is crucial for reliable defense. Methods And Evaluation Criteria: The methodology, focusing on task planning and code generation, is interesting and well-motivated. The use of in-context demonstrations to bridge guard requests and executable checks seems to be effective. However, the evaluation would benefit from: - Additional metrics that capture the consistency and robustness of the guardrails over repeated executions. - Experiments addressing how the method adapts to dynamically changing attack surfaces. 
Theoretical Claims: The paper does not present detailed theoretical proofs; rather, it focuses on empirical validation of the proposed framework. The high-level conceptual reasoning appears sound, but the absence of theoretical guarantees regarding robustness against evolving attacks leaves some questions unanswered. Experimental Designs Or Analyses: The experimental design is thorough in terms of benchmark creation and comparison with a well-crafted baseline and an invasive guardrail approach. Nonetheless, the evaluation could be strengthened by: - Including additional state-of-the-art LLMs, especially those known for strong reasoning capabilities (e.g., OpenAI o1 and DeepSeek R-1). - Considering baselines that use an LLM agent as a judge for attack and risk assessment, which might offer a stronger point of comparison. - Demonstrating defense performance against more sophisticated and adversarial attack scenarios to assess scalability and generalizability. Supplementary Material: I reviewed the appendices, which appear comprehensive, providing further details on the design and experiments. Relation To Broader Scientific Literature: This work builds upon and extends ideas from recent works in LLM moderation (e.g., LlamaGuard) and agent-based reasoning. The integration of code generation for safety enforcement is a promising direction that aligns with the growing literature on using structured reasoning for reliability in AI systems. Essential References Not Discussed: While the paper cites a solid set of references, it would be helpful to see comparisons with recent works that employ multi-agent setups for risk assessment or those that specifically address dynamic adversarial environments. For instance, referencing recent advances in adversarial robustness for LLM agents or discussing related work on using a second LLM as a judge for potential risks would further situate the contribution. For example: Hua, Wenyue, et al. 
"Trustagent: Towards safe and trustworthy llm-based agents through agent constitution." Trustworthy Multi-modal Foundation Models and AI Agents (TiFA). 2024. Other Strengths And Weaknesses: Strengths: - Innovative use of in-context learning to generate code-based guardrails. - Clear empirical improvements over baseline methods on two benchmarks. - Non-invasive design that preserves target agent performance. Weaknesses: - Limited evaluation on how the approach handles dynamically changing attack surfaces. - Lack of metrics or analysis on the consistency of the generated guardrails. - Evaluation restricted to a small set of LLMs; more comparisons with cutting-edge models (e.g., OpenAI o1, DeepSeek R-1) would strengthen the paper. - Baseline comparisons could be extended to include approaches that use an additional LLM agent as a judge for potential attacks. Other Comments Or Suggestions: - Consider discussing potential strategies for updating or adapting guardrails in real-time as new attack vectors emerge. - Including a sensitivity analysis of the in-context retrieval process could shed light on the robustness of the approach. - A deeper discussion on scalability in more complex, multi-faceted scenarios would be beneficial. Questions For Authors: - How does GuardAgent adapt or update its guardrail code when facing dynamically changing attack surfaces over time? - Can you provide additional experiments or metrics that assess the consistency of the generated guardrails across multiple runs? - Have you considered evaluating GuardAgent with additional state-of-the-art LLMs, such as OpenAI o1 and DeepSeek R-1, to further validate its performance? - Could you elaborate on how the framework would perform under more sophisticated adversarial scenarios and whether it can scale to cover such cases? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and your positive feedback. We especially thank you for recognizing the importance of our work and our contributions. Please find our responses to your comments below.

**W1&Q1&W5: Dynamically evolving attack surfaces and more sophisticated adversarial scenarios**

**A1**: Thank you for the insightful question. GuardAgent is designed to systematically follow safety rules, such as access control requirements, by generating the corresponding code and executing it. It can therefore flexibly integrate additional safety rules, including "constitutions" that describe potential attack strategies, making GuardAgent increasingly resilient over time as new adversarial strategies are identified. In other words, GuardAgent's resilience can be continuously enhanced by periodically summarizing newly uncovered adversarial strategies, and this framework would be more efficient than traditional methods such as adversarial training! We will add this discussion to the paper following your suggestions, and we believe it will lead to a series of interesting new research directions!

**W2&Q2: Consistency of guardrail outputs across multiple runs**

**A2**: Thank you for your suggestion. We add a new experiment to show the consistency of GuardAgent over multiple runs. We test GuardAgent 5 times on the two datasets and show the percentage of examples where GuardAgent made 5, 4, 3, 2, 1, or 0 correct predictions. For most examples, GuardAgent gave similar results in the 5 runs (with at least 4 out of 5 or no correct predictions). We will include this experiment in the revised paper. Thank you!

|||5|4|3|2|1|0|
|-|-|-|-|-|-|-|-|
|EICU-AC|llama3|83.9|12.3|1.2|2.2|0.3|0|
||llama3.1|83.5|15.1|0.3|0.3|0.6|0|
|Mind2Web-SC|llama3|80.5|1.0|1.5|0|1.5|15.5|
||llama3.1|71.0|13.0|3.0|1.5|3.5|8.0|

**W3&Q3: Test on additional state-of-the-art LLMs**

**A3**: Thank you for the valuable suggestion.
Please find the evaluation results for GuardAgent on o1 and r1 in the table below:

||EICU-AC|||||Mind2Web-SC|||||
|-|-|-|-|-|-|-|-|-|-|-|
||LPA|LPP|LPR|EA|FRA|LPA|LPP|LPR|EA|FRA|
|r1|99.7|100|100|87.7|100|85.5|92.8|77.0|77.0|100|
|o1|100|100|100|98.3|100|87.5|95.2|79.0|76.0|100|

GuardAgent performs well (compared to the results in Table 1 of our paper) on these reasoning LLMs. We will add the results to our revised paper. Thank you for your suggestion!

**W4&Q4: Comparison with LLM agent as a judge for risk assessment, e.g., TrustAgent.**

**A4**: Thank you for bringing this important work to our attention! TrustAgent is indeed relevant to our work as it safeguards LLM agents based on constitutions established for diverse agent application domains. Below, we compare GuardAgent with TrustAgent, both with GPT-4.

||EHRAgent+EICU-AC|||SeeAct+Mind2Web-SC|||
|-|-|-|-|-|-|-|
||Accuracy|Precision|Recall|Accuracy|Precision|Recall|
|GuardAgent|98.7|100|97.5|90.0|100|80.0|
|TrustAgent|52.8|53.9|63.7|47.0|47.5|56.0|

Here, we focus on the risk assessment of TrustAgent and combine its “risky” categories into a single label to better fit the binary classification setting. Moreover, we replaced the original constitution of TrustAgent with the safety requests used in our work, which are more compatible with the two agents here. The results indicate that TrustAgent cannot adequately handle these safety requests. We discovered that the primary reason is that TrustAgent is designed to adhere to a general concept of safety, aimed at safeguarding textual-based agent *planning*. Conversely, GuardAgent verifies whether the agent's *execution process* (such as the code generated by EHRAgent) adheres to the established safety rules. Thank you for your comment!

**W6: Sensitivity analysis of the in-context retrieval process**

**A5**: Thank you for this constructive suggestion. We have included two sensitivity analyses regarding the in-context retrieval process.
First, in Section 5.3, we found that GuardAgent still performs well with fewer demonstrations. Second, in Appendix M, we discovered that memory retrieval based on "least-similarity" instead of "max-similarity" results in only moderate degradation to GuardAgent’s performance. Here, we add an experiment to examine the impact of the quality of retrieved demonstrations. Instead of storing only the correct executions of GuardAgent in the memory bank, we store all executions indiscriminately.

||EICU-AC|||||Mind2Web-SC|||||
|-|-|-|-|-|-|-|-|-|-|-|
||LPA|LPP|LPR|EA|FRA|LPA|LPP|LPR|EA|FRA|
|llama3|93.0|100|86.4|86.4|100|81.5|98.5|65.0|64.0|100|
|llama3.1|91.5|100|86.8|79.9|100|83.0|92.3|72.0|72.0|100|
|gpt-4|90.2|90.2|91.3|87.6|89.6|81.5|89.0|73.0|72.0|100|

Compared with Table 1 of our paper, there is a less than 9% drop in absolute safeguard accuracy across all configurations, demonstrating decent robustness of GuardAgent to the quality of the demonstrations. Please let us know if you have further questions. Thank you again!
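For concreteness, the max- vs. least-similarity retrieval compared above can be sketched as follows. This is a toy illustration only: the function name, embeddings, and demonstrations are invented for the example and are not GuardAgent's actual implementation.

```python
import numpy as np

def retrieve_demonstrations(query_emb, memory_embs, demos, k=2, mode="max"):
    """Pick the k demonstrations whose stored embeddings are most
    (mode="max") or least (mode="least") cosine-similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    m = memory_embs / np.linalg.norm(memory_embs, axis=1, keepdims=True)
    sims = m @ q                          # cosine similarity per stored demo
    order = np.argsort(sims)              # ascending
    idx = order[-k:][::-1] if mode == "max" else order[:k]
    return [demos[i] for i in idx]

# Toy memory bank: three stored demonstrations with 4-d embeddings.
memory = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.9, 0.1, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0]])
demos = ["demo_A", "demo_B", "demo_C"]
query = np.array([1.0, 0.05, 0.0, 0.0])
print(retrieve_demonstrations(query, memory, demos, k=2, mode="max"))
# → ['demo_A', 'demo_B']
```

Swapping `mode="least"` retrieves the least-similar demonstrations instead, which is the degraded setting ablated in Appendix M.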
Summary: GuardAgent is the first guardrail agent designed to monitor and regulate the actions of LLM agents. It operates by leveraging LLMs to translate security requirements into executable guardrail code. A memory module is utilized to enhance guardrail performance by retrieving past task demonstrations. Experimental results demonstrate high accuracy on two benchmarks: EICU-AC (for healthcare access control) and Mind2Web-SC (for web agent safety). Claims And Evidence: N/A Methods And Evaluation Criteria: N/A Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: minor contribution to scientific literature Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths - Ensuring security and safety in LLM systems is crucial in today's AI landscape, and this paper addresses a highly relevant issue. - The study presents practical scenarios, including EICU-AC (Healthcare agent) and the Mind2Web-SC (Mind2Web-Safety Control) benchmark, to evaluate the proposed approach. Weaknesses - The authors heavily rely on LLM capabilities to build a safety-guarding agent. However, the approach presented in Section 4 appears to be more of an engineering implementation rather than a novel conceptual advancement. This makes the paper feel more like a technical report than a research contribution. - The paper does not introduce any novel methodology to address security and safety challenges in LLMs. - Additionally, the experimental results do not seem to reveal any significant new insights or discoveries. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and your positive ratings! We are glad that you acknowledge the importance of the problem we are solving and the practicality of our settings. In the following, we reply to your concerns one by one. **W1: Novel conceptual advancement** **A1**: Thank you for your valuable comment. Our work introduces several key conceptual advancements, including the following: - Prior to our work, guardrails for LLMs typically referred to trained models that detect whether a target LLM’s inputs and/or outputs are harmful. We extend this concept by introducing guardrails for LLM agents that monitor whether an agent's actions comply with prescribed rules. - Existing guardrails for LLMs are model-based. In contrast, our approach demonstrates the advantages of using an LLM agent, rather than a standalone model, to safeguard other LLM agents. - Our work highlights the effectiveness of a reasoning-then-action pipeline – consisting of task planning followed by code generation and execution – augmented by knowledge retrieval from memory, in enhancing the safety of LLM agents. These conceptual contributions underscore both the novelty and the significance of our work. **W2: Novel methodology to address security and safety challenges in LLMs** **A2**: Thank you for your comment on our methodology. Our approach incorporates several key innovations: - Our guardrail mechanism is based on code generation and execution, providing strong reliability since all decision outcomes depend on the successful execution of correctly generated code. - We employ in-context learning for task planning and code generation, eliminating the need for model training. - We introduce an extensible toolbox for storing supportive functions used in code generation, which enhances GuardAgent’s flexibility in handling novel guardrail requests. 
These design choices have been empirically validated to be effective in achieving their intended goals, as demonstrated by our experimental results. **W3: Significant new insights or discoveries in the experimental results** **A3**: Thank you for your constructive comments. In our revised paper, we will highlight the following key insights derived from our experiments: - The comparison between our code-based guardrail and the text-based guardrail (i.e., the baseline) demonstrates the superior reliability of code-based guardrails in safeguarding LLM agents. - Our evaluation of GuardAgent on the two created benchmarks (EICU-AC and Mind2Web-SC) and the commonsense reasoning dataset CSQA underscores the generalizability of code-based guardrails. - Interestingly, we observed that LLM agents tend to generate code-based guardrails even when not explicitly instructed to do so. A potential explanation is that safety rules are often naturally represented as decision trees, where inputs consist of the target agent’s action trajectories and outputs indicate whether a rule has been followed or violated. This structure aligns well with a code generation and execution paradigm, making it a more suitable approach for safety decision-making in many scenarios. Thank you again for your valuable comments! We are happy to address your remaining concerns if there are any.
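To illustrate the code-generation paradigm described above, here is a hedged, minimal sketch of what a generated code-based guardrail might look like. The rule, role names, and fields are invented for this example; they are not taken from EICU-AC or from GuardAgent's actual outputs.

```python
# Hypothetical sketch: a safety rule compiled into an executable check
# over the target agent's action trajectory (role names/fields invented).
FORBIDDEN_FIELDS = {"nurse": {"diagnosis", "medication_history"}}

def guardrail_check(role, actions):
    """Return (allowed, violations): flag any read of a field that the
    given role is not permitted to access."""
    violations = [a for a in actions
                  if a["op"] == "read"
                  and a["field"] in FORBIDDEN_FIELDS.get(role, set())]
    return (len(violations) == 0, violations)

trajectory = [{"op": "read", "field": "vital_signs"},
              {"op": "read", "field": "diagnosis"}]
allowed, bad = guardrail_check("nurse", trajectory)
print(allowed)   # False: the diagnosis read violates the rule
```

The decision-tree structure noted above (trajectory in, rule-compliance verdict out) is exactly what such generated code captures, which is one reason the code-based paradigm fits safety decision-making.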
IMTS is Worth Time $\times$ Channel Patches: Visual Masked Autoencoders for Irregular Multivariate Time Series Prediction
Accept (poster)
Summary: This paper addresses forecasting on Irregular Multivariate Time Series (IMTS), where observation intervals are variable and there are missing values. The authors propose to use a Masked AutoEncoder (MAE) to efficiently handle missing values in IMTS and scale pre-training/fine-tuning. They also propose to use a Graph Convolutional Neural Network (GCNN) for handling a variable number of channels in different time series, temporal period embedding for handling timestamps as a variable in the MAE framework, and a coarse-to-fine strategy for focusing on the relevant temporal context when forecasting a specific timestamp. Experimental results on multiple datasets demonstrated that the proposed method performs better than the existing methods.

Claims And Evidence:
* Eq.6 is not justified well. It should be discussed in literature or theoretically.
* The usage of GCN in Section 2.3.3 is not justified well. It should be discussed in literature or theoretically.
* Eqs.11-15 for the time period embeddings are not justified well. They should be discussed in literature or theoretically.
* Similar comments to the above for Sections 2.4.2 and 2.5.

Methods And Evaluation Criteria: Make sense.
Theoretical Claims: NA
Experimental Designs Or Analyses: I checked Section 3 and the appendices.
Supplementary Material: Yes. All parts.
Relation To Broader Scientific Literature: The usage of MAE for time series forecasting with missing values is novel.
Essential References Not Discussed: NA

Other Strengths And Weaknesses: The proposed method may have novel points and some practical impact, but the clarity issues are too severe to understand the method.
* We tend to use "forecasting" in this problem setting rather than "prediction."
* Figure 1 is not referred to in the main text.
* Figure 2 is hard to understand. Where are the results of DLinear and VisionTS?
* In Section 2.2, L, N, and q^n_j are not defined.
* Paragraphs "IMTS Representation" and "IMTS Prediction" appear to share some notation, but their relationship and dependencies are not explicitly described, which makes it hard to follow Section 2.2.
* The relationship between t_start^p, l_p, and r_p is unclear.
* In Eqs.2, 5, and 7, "||" might be some operator, but it is not defined.
* In Eq.3 and the description under Eq.3, L_p and D_in are not defined (L_p can be s, maybe?). The character d may be used in different meanings from Eq.1. And, what is the meta-filter?
* In Eq.4, the superscript "*" is not defined.
* In Eq.4, can the subscript l_p:r_p be just i, removing [i]?
* In Eq.5, the variable m is used inconsistently, with and without a subscript.
* Eqs.7-9 are hard to understand. What is the superscript s? Why is k in {1,2}?
* The equation in l.167 of the right-hand side of p.4 is incorrect.
* In the equation in l.180 of the right-hand side of p.4, R^2D should be R^Px2D.
* Eqs.11-15 are hard to follow.
* In Eq.20, does TPE not require a projection into the input dimension of the decoder?
* In Eq.21, the variable e is used inconsistently with the previous formulation.
* The losses in Eqs.26 and 27 lack the operation (summation or averaging) over n.
* In l.312 of the left-hand side of p.6, it is better to use notation consistent with the main text (x should be used, not y).
* Tables 1 and 2 appear in an inverted order.

Other Comments Or Suggestions: NA
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback and recognition of our work’s **novelty and practical impact**, addressing each point below for clarity.

## Overview of Method

VIMTS processes IMTS by: 1) segmenting into fixed intervals with variable-length points per channel; 2) extracting unified patch representations via dynamic convolution; 3) compensating missing values via a GCN capturing inter-channel dependencies; 4) modeling temporal patterns across P×N patches using MAE; 5) pre-training with masked patch reconstruction to transfer MAE’s sparse processing, then fine-tuning; 6) coarse-to-fine decoding (patch reconstruction → timestamp queries via continuous embeddings). This vision-inspired pipeline uniquely adapts to irregularity-aware forecasting. Notations will be clarified in the revision.

## Methodological Clarifications

### Q1: Justification of Channel Embeddings (Eq.6)

Learnable channel embeddings $e_n$ model channel-specific traits (units, statistics, missing patterns), inspired by positional embeddings and cross-modal retrieval’s modality embeddings [3]. They distinguish heterogeneous channels (e.g., physiological vs. environmental) during cross-channel compensation and temporal modeling.

### Q2: GCN Usage in Cross-Channel Interaction (Sec 2.3.3)

Our GCN addresses sparse/misaligned observations by modeling bidirectional channel dependencies. We fuse static embeddings (inflow/outflow nodes) with dynamic patch features via gated fusion:

$$g_{p,k} = \text{ReLU}\left(\tanh\left([H_p \parallel E_k^s]W_k^g\right)\right),$$

to obtain hybrid embeddings $E_{p,k}$. The adaptive adjacency matrix $A_p = \text{Softmax}(\text{ReLU}(E_{p,1}E_{p,2}^\top))$ dynamically captures directional relationships, enabling context-aware message passing. Ablations (Tab 3) confirm the GCN’s necessity in compensating missing values through cross-channel interactions, aligning with evolving graph structures in spatio-temporal GNNs [1,2].
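A minimal numpy sketch of the gated fusion and adaptive adjacency described in Q2. Since the exact form of the hybrid embedding $E_{p,k}$ is abbreviated in this response, we simply take $E_{p,k} = g_{p,k}$ here, and all dimensions are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
N, D, Ds = 4, 8, 5                 # channels, patch-feature dim, static dim

H_p = rng.normal(size=(N, D))              # dynamic patch features
E_s = rng.normal(size=(2, N, Ds))          # static inflow/outflow embeddings
W_g = rng.normal(size=(2, D + Ds, Ds))     # gating projections, k in {1, 2}

# Gated fusion: g_{p,k} = ReLU(tanh([H_p || E_k^s] W_k^g))
E_hybrid = [np.maximum(np.tanh(np.concatenate([H_p, E_s[k]], 1) @ W_g[k]), 0)
            for k in range(2)]

# Adaptive adjacency: A_p = Softmax(ReLU(E_{p,1} E_{p,2}^T)), row-normalized
A_p = softmax(np.maximum(E_hybrid[0] @ E_hybrid[1].T, 0.0), axis=-1)
print(A_p.shape)                   # one N x N matrix, rows sum to 1
```

Each row of `A_p` is a normalized distribution over source channels, which is what enables the directional, context-aware message passing described above.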
### Q3: Time Period Embeddings (Eq.11-15)

Our TPE adapts ViT’s 2D positional encoding [4] for temporal data: the horizontal axis encodes patch order $p$, while the vertical axis fixes the duration $s$. Initialized with sinusoidal functions, it preserves MAE’s positional awareness while adapting to IMTS patterns during fine-tuning. This dual-axis design jointly maintains sequential order and temporal-scale information.

### Q4: MAE Reconstruction Mechanism (Sec.2.4.2)

MAE reconstruction masks $r\%$ of patches (e.g., 60% for PhysioNet) to simulate missing segments. The model infers missing regions via neighboring patches and cross-channel dependencies. The decoder fuses visible tokens $z_p$ and mask tokens $[M]$ (projected via $W_{dec}$), with TPE ensuring temporal coherence, extending MAE’s sparse processing [6] to irregular series through patch-level masking.

### Q5: Patch-to-Point Prediction (Sec.2.5)

Our coarse-to-fine strategy bridges patch-level semantics and timestamp-specific queries: (1) the MAE decoder reconstructs the dense patch representation $\hat{z}_{i_q}^m$, aggregating temporal patterns and cross-channel contexts; (2) the continuous-time embedding $\phi(t_q)$ injects fine-grained temporal context; (3) an MLP fuses $\hat{z}_{i_q}^m$ and $\phi(t_q)$ for the final prediction $\hat{x}_q$. This hierarchical approach mirrors image super-resolution (low-resolution features guide high-detail reconstruction), adapted for temporal irregularity via timestamp embeddings.

## Technical Corrections

- Notation
  - $L$: total timestamps; $N$: channels; $q_j^n$: channel-specific queries; unified $\mathcal{O} = (\mathcal{T}, \mathcal{X}, \mathcal{M})$
  - $t_\text{start}^p$: patch start; $l_p/r_p$: first/last timestamp index
  - "$\|$": feature concatenation
- Equation Fixes
  - Eq.5: $m_p \to m_p^n$ (channel-specific mask)
  - Eq.20: dimension $\mathbb{R}^{P \times 2D}$
  - Losses (Eqs.26-27): normalized by $|\mathcal{Q}|$
- Figs/Tabs
  - Fig 1: added contrast with prior work
  - Fig 2: adjusted DLinear/VisionTS scales
  - Tab 1 & 2: order corrected

## Reviewer-Specific Queries

### Q6 (Eq. in l.167)

Fixed as $\mathbf{H}'_p = [\mathbf{H}_p \| \mathbf{H}_p^\text{gcn}] \in \mathbb{R}^{N \times 2D}$.

### Q7 (Eq.3)

Meta-filter = an MLP generating adaptive convolution kernels [5] for variable-length inputs.

### Q8 ($k$ in Eqs.7-9)

$k \in \{1,2\}$ denotes inflow/outflow embeddings for bidirectional dependencies.

### Q9 (Loss)

The loss averages over queries ($|\mathcal{Q}|$), not channels, due to timestamp misalignment.

[1] Graph wavenet for deep spatial-temporal graph modeling. Arxiv.
[2] Irregular multivariate time series forecasting: A transformable patching graph neural networks approach. ICML24.
[3] Cross-modality transformer for visible-infrared person re-identification. ECCV22.
[4] An image is worth 16x16 words: Transformers for image recognition at scale. Arxiv.
[5] Irregular traffic time series forecasting based on asynchronous spatio-temporal graph convolutional networks. KDD24.
[6] Masked autoencoders are scalable vision learners. CVPR22.
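Since Eqs. 11-15 are not reproduced in this response, the following is only one plausible numpy reading of the dual-axis sinusoidal initialization sketched in Q3: half the embedding dimensions encode the patch order $p$ and half encode the fixed duration $s$. The frequency schedule and dimensions are assumptions for illustration.

```python
import numpy as np

def sincos_1d(pos, dim):
    """Standard sinusoidal encoding of scalar positions into `dim` dims."""
    omega = 1.0 / (10000 ** (np.arange(dim // 2) / (dim // 2)))
    ang = np.outer(pos, omega)                  # (len(pos), dim // 2)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=1)

P, D, s = 6, 16, 4.0        # patches, embedding dim, patch duration
# Dual-axis TPE: half the dims encode patch order p, half the duration s.
tpe = np.concatenate([sincos_1d(np.arange(P), D // 2),
                      sincos_1d(np.full(P, s), D // 2)], axis=1)
print(tpe.shape)            # (6, 16)
```

Because the second axis is constant across patches, it carries only the temporal scale, while the first axis preserves sequential order, matching the stated dual-axis design.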
Summary: The paper introduces VIMTS, a framework adapting MAE for IMTS prediction, addressing challenges like unaligned signals and missing values. Unlike existing methods that separately model temporal and channel patterns, VIMTS enhances representation learning by transforming sparse signals into image-like patches, capturing cross-channel dependencies. The model leverages MAE’s pre-trained capability for patch reconstruction and refines predictions with a coarse-to-fine approach. Additionally, it integrates self-supervised learning with supervised fine-tuning. Extensive experiments demonstrate VIMTS’s performance and few-shot capability, expanding the application of visual foundation models to time series prediction. -------Reply after rebuttal: Thank you for the response, it addresses most of my concerns. However, after considering multiple factors, I’ve decided to maintain my original score. Claims And Evidence: The claims in line 23 (left) and lines 29-30 (right) on page 1 stating that existing pre-training models are limited to UTS are inaccurate, as many existing studies have already considered multivariate time series. Methods And Evaluation Criteria: The model designed in the paper is both reasonable and effective for addressing the IMTS modeling problem. The dataset used is a commonly employed real-world dataset in IMTS modeling, which also provides practical guidance for solving real-world problems. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: The experiments in the paper are sufficient, with appropriate comparison methods and thorough analysis. The use of commonly employed real-world datasets enhances the credibility of the results. Supplementary Material: I have read the whole supplementary material. Relation To Broader Scientific Literature: This paper details efforts to advance IMTS forecasting. 
The paper provides a new perspective on the forecasting problem of IMTS to some extent and effectively improves forecasting performance. Essential References Not Discussed: Li Z, Li S, Yan X. Time series as images: Vision transformer for irregularly sampled time series[J]. Advances in Neural Information Processing Systems, 2023, 36: 49187-49204. Other Strengths And Weaknesses: 1. The authors should conduct a more in-depth comparison and analysis between this work and **ViTST [1]**. If the authors choose not to include this comparison, the difference in tasks should not be used as an excuse, as both models first learn representations and then use a projection head for downstream tasks. 2. The contributions section is overly lengthy. 3. Since the model incorporates multiple GCN layers, it would be helpful for the authors to compare the **time and space complexity** of the proposed method with other SOTA models. [1] Li Z, Li S, Yan X. Time series as images: Vision transformer for irregularly sampled time series[J]. Advances in Neural Information Processing Systems, 2023, 36: 49187-49204. Other Comments Or Suggestions: 1. **Figure 2** contains too many elements, making the focus unclear. It would be beneficial to simplify the figure by selecting key components that highlight the advantages of the proposed method. 2. In **Figure 3**, the legend is only represented by letters, which is not sufficiently clear. It is recommended to add descriptive text, such as "Channel Mask", to improve clarity. Questions For Authors: See Other Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of the practical significance of our work and our innovations for improving IMTS forecasting performance, as well as the insightful feedback. Below, we address each concern raised by the reviewer in detail:

## Q1: Incorrect claims regarding pre-trained models on MTS

We will clarify in the revision that our claim applies specifically to IMTS forecasting scenarios, while most existing pre-trained-model-based forecasting methods target regularly sampled time series data.

## Q2: Comparison with ViTST

We appreciate the reviewer's suggestion. Although both ViTST [1] and VIMTS employ image-like representations for time series, there are key distinctions:

- **Differences in Innovation Points**: To extract information from IMTS data, **ViTST** handles missingness via **interpolation and transformation from IMTS to images**, which causes **information loss and computational overhead**, makes the inputs less precise for understanding patterns in the data, and ultimately hurts forecasting performance. In contrast, **our model divides data by time intervals, patchifies it into time × channel patches, and extracts block features** without interpolation, which preserves **data integrity** and creates more precise inputs for the MAE to model internal data structures, without the computational overhead of image construction. Further, it explicitly models inter-channel interaction with a GCN, compensating for missingness across channels. This analysis is further supported by an empirical evaluation of model performance on PhysioNet.
|Method| MSE($10^{-3}$) | MAE($10^{-2}$) |
| - | - | - |
| VIMTS (Ours)| 4.81 ± 0.07| 3.54 ± 0.04|
| ViTST| 66.37 ± 0.08| 20.16 ± 0.05|
| VisionTS (Zero interpolation) | 42.41 ± 0.02 | 13.13 ± 0.02 |

Note that this experiment also involves VisionTS [2], a similar image-based method, and confirms that despite its powerful visual MAE framework and strong performance on regular time series, it similarly deteriorates when processing irregular data.

- **Computational Cost**: **VIMTS** employs a visual MAE backbone (base) with a small number (around 3) of GCN layers. Our hyperparameter experiments show that a lightweight configuration (node features of 5-10 dimensions, projectors of 32 × 32 to 64 × 64) is optimal, with no significant benefits from additional complexity. In comparison, **ViTST** uses a Swin Transformer as the visual backbone and RoBERTa as the text backbone, resulting in considerable computational costs; the quantified experimental results shown in the table in Q3 further validate our claims.

## Q3: Time and space complexity analysis

We acknowledge the importance of complexity analysis, and will include the table below comparing parameters and inference times in the revision.

| | VIMTS(base)| VIMTS(base) w/o GCN | ViTST | tPatchGNN | CRU | WrapFormer |
| - | - | - | - | - | - | - |
| **Trainable Param**| Pretrain: 111.01M Finetune: 332.61k | Pretrain: 110.98M Finetune: 332.61k | 212.31M | 943.99k | 28.65k | 178.06K |
| **Avg Training Time (s) / epoch** | 87.80 | 84.96 | 140.20 | 17.14 | 172.45 | 22.67 |
| **Avg Inference Time (ms) / instance** | 4.82 | 4.34 | 7.09 | 1.60 | 7.66 | 5.48 |

**Parameter Numbers**: VIMTS utilizes the visual MAE-base as its backbone, which can be fine-tuned on a single RTX 4090; the GCN layers in VIMTS are lightweight, contributing only around 32.4k parameters. The number of trainable parameters is acceptable for real-world application and lower than that of other vision-foundation-model-based methods like ViTST.
**Time Complexity**: VIMTS achieves efficient training (87.80 s/epoch), while its inference speed (4.82 ms/instance) outperforms ViTST (visual foundation model, 7.09 ms/instance), CRU (Neural-ODE, 7.66 ms/instance), and WrapFormer (Transformer, 5.48 ms/instance), making it practical for real-world deployment. Though it is slightly slower than lightweight models like tPatchGNN (1.60 ms), this trade-off is justified by superior accuracy and improved data efficiency from pre-training, which is critical in medical applications where sub-5 ms latency remains acceptable.

## Q4: Revisions for better readability

As for Figure 2, we will revise it by selecting representative baseline methods, adjusting the scale, and using a clearer color scheme to highlight the model's superior performance and few-shot ability. As for Figure 3, we will add more intuitive and descriptive text to the legends in the figure to enhance readability. As for the Contribution Section, we will streamline it in the revision to focus on our core insights and make it more concise.

## Reference

[1] Time series as images: Vision transformer for irregularly sampled time series, NIPS23
[2] Visionts: Visual masked autoencoders are free-lunch zero-shot time series forecasters, arXiv:2408.17253.
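As a concrete illustration of the interval-based, interpolation-free patchification contrasted with ViTST's image construction in Q2: the function and data below are invented for this sketch and are not the actual VIMTS code.

```python
import numpy as np

def patchify(obs, n_channels, span, horizon):
    """Group irregular (time, value, channel) observations into fixed
    time-interval patches: one variable-length list per (patch, channel).
    No interpolation is performed; missing cells simply stay empty."""
    n_patches = int(np.ceil(horizon / span))
    patches = [[[] for _ in range(n_channels)] for _ in range(n_patches)]
    for t, v, c in obs:
        patches[min(int(t // span), n_patches - 1)][c].append((t, v))
    return patches

# Two channels, horizon 10 split into spans of 5 -> a 2 x 2 patch grid.
obs = [(0.7, 36.5, 0), (1.2, 80.0, 1), (6.1, 37.0, 0), (9.9, 82.0, 1)]
grid = patchify(obs, n_channels=2, span=5.0, horizon=10.0)
print(len(grid), len(grid[0]))   # 2 2
print(grid[0][0])                # [(0.7, 36.5)]
```

Each grid cell keeps its raw timestamps and values, so downstream modules (dynamic convolution, GCN) see the original observations rather than an interpolated image.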
Summary: This paper presents VIMTS, which exploits the ability of visually pre-trained MAEs to model semantically sparse multi-channel data for IMTS prediction. Specifically, IMTS data is treated as image-like patches across temporal and channel dimensions during the encoding process, which are divided into equally spaced patches where TTCN extracts in-channel features and GCN aggregates cross-channel information to complement the patch representation with missing values. For decoding, VIMTS employs a coarse-to-fine technique that first generates patch-level predictions and then refines them into precise time-point predictions. Extensive experiments on real datasets validate the effectiveness of VIMTS.

## update after rebuttal

The author's response has solved some of my confusion and I keep my score.

Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: yes
Supplementary Material: yes, I have reviewed the supplementary material
Relation To Broader Scientific Literature: This paper is inspired by the multi-channel semantic sparse information modelling capability and time series domain adaptation of visual MAE, which effectively improves IMTS prediction performance.
Essential References Not Discussed: No

Other Strengths And Weaknesses:
Strengths:
1. VIMTS performs well. Compared to previous methods such as T-PATCHGNN, VIMTS shows better prediction performance on three datasets.
2. The methodology is novel. Inspired by visual MAE, VIMTS combines supervised fine-tuning with self-supervised learning to enhance IMTS modelling.

Weaknesses:
1. Figure 3 is difficult to understand; what do Lssl, Lft, and Φ mean? Suggest adding figure notes.
2. The datasets listed in Table 1 all have a relatively small number of variables, which does not adequately validate the effectiveness of self-supervised training in irregular multivariate time series forecasting tasks.
Datasets with a larger number of variables are suggested to be added, for example, the MIMIC dataset.

Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of the **novelty** of our work and our efforts to leverage the **multi-channel semantic sparse information modeling capability** and **time series domain adaptation** of visual MAE. We address the ambiguity you pointed out and your other concerns below.

## Q1: Revisions to notations in Figure 3

We appreciate your careful review of the ambiguous notations in Figure 3. To clarify the specific notations you mentioned:

- $L_{ssl}$: the L1 loss value calculated during the self-supervised learning phase.
- $L_{ft}$: the L1 loss value calculated during the fine-tuning phase.
- $Φ(t)$: the time embedding for a given timestamp $t$, with the explicit formula provided in Equation (1); we utilize it to generate precise predictions from the matching time-period and channel representations.

We will incorporate these detailed explanations into the caption of Figure 3 to enhance its clarity. Furthermore, we will include a comprehensive notation summary table in the appendix to improve overall readability.

## Q2: Suggestions about validating the effectiveness of self-supervised learning

Thank you for highlighting the importance of validating the **effectiveness of self-supervised learning**. Our framework inherently benefits from multi-channel datasets. Our self-supervised learning (SSL) strategy effectively **adapts the vision pre-trained model to multi-channel time series data**, enabling our model to more **efficiently extract information from different time intervals** and improve performance on forecasting tasks. We address this issue from both theoretical and empirical perspectives.

**Theoretical Analysis**: With the **GCN module**, which captures inter-channel interactions and complements missing values, VIMTS effectively transforms the IMTS problem into a regularly sampled multivariate time series (MTS) problem.
Given that MAE-based SSL is effective in learning temporal structures under complete-data conditions, our SSL framework can leverage these representations to enhance IMTS modeling. This theoretical advantage leads to improved performance on multi-channel datasets like MIMIC. Furthermore, as the vision pre-training is initially performed on 3-channel data (**RGB space**), our SSL strategy is crucial for adapting the model to more diverse application scenarios with a **greater number of channels**. By learning from other channels and temporal contexts, our model can effectively generalize from the 3-channel pre-trained state to handle more varied and complex data.

**Empirical Analysis**: We sincerely appreciate the reviewer’s suggestion to validate our method on high-dimensional datasets. Following this guidance, we conducted new experiments on the MIMIC dataset (with $96$ channels) and compared against the state-of-the-art t-PatchGNN. Relevant data and a supplementary figure are available at this website: **[anonymous repo]**(https://anonymous.4open.science/r/VIMTS-Rebuttal-028D/). Our ablation study and few-shot experiments clearly demonstrate a trend: with the SSL strategy, our model exhibits significant **data efficiency and strong performance even with limited data**. This supports our argument that SSL is effective in multi-channel scenarios. Moreover, our results demonstrate that VIMTS achieves superior performance compared to the current state-of-the-art t-PatchGNN, as shown in the table below (mean of the results from 5 seeds):

| Metric | VIMTS | t-PatchGNN (Reported) |
|--------|----|-----------------------|
| MSE | $1.3654 \times 10^{-2}$ | $1.69 \times 10^{-2}$ |
| MAE | $6.4482 \times 10^{-2}$ | $7.22 \times 10^{-2}$ |

Notably, we identified inconsistencies and bugs in t-PatchGNN’s preprocessing pipeline for MIMIC (refer to their GitHub issues). Upon re-evaluating their model with corrected preprocessing, VIMTS still exhibits competitive MSE (vs.
t-PatchGNN’s $0.013608 \pm 0.000179$) and superior MAE (vs. $0.0656 \pm 0.001141$). This validates that VIMTS not only scales effectively to high-dimensional IMTS but also generalizes robustly across varying missingness patterns. These additional experiments further confirm our theoretical analysis: the GCN-enhanced SSL framework more effectively leverages inter-channel correlations, leading to stronger performance in real-world IMTS scenarios. We will include these results in the revised manuscript. --- Rebuttal Comment 1.1: Comment: Thanks to the author for the reply. Taking into account my comments and those of the other reviewers, I think that there is a real problem with the readability of the current paper. Therefore I decide to keep my score. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your valuable feedback and for dedicating time to review our work. Readers familiar with irregular multivariate time series forecasting would likely find our presentation more accessible. Nevertheless, we recognize the importance of clear presentation and will implement your suggestions to enhance figures, refine notations, and improve module descriptions in our revision. We believe that with the feedback from you and other reviewers, our revision will be more clear and our contributions will be more effectively communicated to the community.
Summary: This paper proposes a new approach for Irregular Multivariate Time Series (IMTS), characterized by unaligned multichannel signals and massive missing values. Instead of modeling temporal and channel patterns in isolation, as in most of current research, this paper proposes VIMTS, a framework that adapts Visual Mask autoencoder (MAE) for IMTS prediction and jointly model temporal-channel patterns. The intuition is that formatting IMTS into image-like patches will utilize cross-modality learning. The core idea is to utilize a pre-trained vision MAE (He et al., 2022) to model time period patches of IMTS data with multichannel information. Claims And Evidence: The claims that the proposed approach addresses the aforementioned issues are generally well supported. See detailed comments and suggestions in sections below. Methods And Evaluation Criteria: The architecture in this paper consists of three main components: time×channel patchify, time-wise reconstruction, and the patch2point Prediction. The authors claim the time*channel patchification is one of the innovations of this paper. However, this approach seems to be similar to the approaches used in other papers, such as Jungo, J., Xiang, Y., Gashi, S., & Holz, C. (2024). Representation Learning for Wearable-Based Applications in the Case of Missing Data. *Human-Centric Representation Learning workshop at AAAI 2024*. It is also a natural way to adopt the vision MAE to the time series. Therefore, please clarify the difference of your approach and innovation. Theoretical Claims: NA Experimental Designs Or Analyses: Experiments are included and results show the proposed method perform well comparing to baselines. The paper also includes a comprehensive ablation study to check the role of each component of the model architecture. The major comment is that while there are about 20 baselines selected, not all of them are state of the art. 
In particular, the paper would benefit from comparing to approaches that use the transformer architecture and incorporate the masking approach. Supplementary Material: All of them. Relation To Broader Scientific Literature: Although several studies have explored applying Vision Masked Autoencoder (MAE) methods to time series data, the challenges posed by irregular sampling frequencies and significant missing data remain common and critical, particularly in healthcare applications. This paper addresses these gaps and has good potential for broad application in practical scenarios. Essential References Not Discussed: The literature review of this paper is comprehensive. Other Strengths And Weaknesses: The notation could use more detailed definitions. For example, please clarify the definitions of the notations $N$ and $L$. While their meaning can be interpreted from the context, it would be more rigorous to formally define them and explain their practical meaning in applications. A thorough check of the notation and the formulae is recommended. In line 312, the general definitions of MAE and MSE are given. However, the definition of $y$ in the context of this paper's tasks is not clear. Please further specify the definition of such errors. Other Comments Or Suggestions: It would be helpful to discuss the trade-off between the model's performance and computational efficiency, particularly given the multi-step nature of the proposed approach. Questions For Authors: None. Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We deeply appreciate the reviewers' recognition of our method's **novelty**, **comprehensive experiments**, and **potential for broad application**. We address your concerns below:

## Q1: Originality and differences from Jungo et al.

While both works explore patchification, our core innovation lies in:
1. **Enhanced Channel Dependency Modeling with GCN:** VIMTS employs GCNs to learn both static and dynamic information in an asymmetric manner, thereby modeling bidirectional inter-channel dependencies and enabling explicit cross-channel compensation for missing values—which is more consistent with the real information-dependence situation than Jungo et al.'s projection-only method. Ablation studies verify its effectiveness.
2. **Leveraging Vision Pretraining:** We utilize vision pretraining capabilities, providing strong initialization for learning sparse patterns from limited data. It is a vital component not explored in Jungo et al.
3. **Coarse-to-fine Strategy:** As confirmed in our ablations, VIMTS forecasts with an encoder-decoder architecture, enabling more precise predictions generated from related time-segment and channel contexts. It outperforms direct-projection approaches like Jungo et al.'s, which generate predictions through global projection features.

## Q2: Comparison to other transformer-based baselines with mask reconstruction

Our experiments have included SOTA Mask-Reconstruction (MR) transformer variants like PatchTST and VisionTS, demonstrating VIMTS's superior performance. **Limitations of Existing Methods on IMTS:** While transformers with MR like **PatchTST** [3] perform well on regularly sampled multivariate time series (MTS) data, they lack explicit mechanisms to handle irregular sampling and inter-channel modeling for missingness compensation. This leads to significant performance drops on IMTS data.
Moreover, **VisionTS** [2], a vision foundation model with MR, performs well on MTS but fails to generalize to IMTS tasks due to its reliance on gridding, resizing, and interpolation when processing IMTS. Similarly, **MOMENT** [1], a pre-trained time series foundation model with MR, underperforms even PatchTST in MTS forecasting (MOMENT - Table [1]) and cannot address irregular sampling or missing-value issues. **Ablation Study Reinforces VIMTS's Design:** As mentioned in the comparison to Jungo's model, encoder-only architectures struggle with IMTS data even when using self-supervised learning. Our coarse-to-fine prediction strategy proves essential for achieving precise time-segment and channel-specific forecasts.

## Q3: Revisions of notations and definitions

For improved clarity and technical rigor, we will add a **notation table** in the appendix and better integrate figures/tables throughout the manuscript. Regarding the ambiguity you've mentioned:
* $N$: total number of channels; $L$: count of unique timestamps.
* $q_j^n$: $j$-th query timestamp in channel $n$.
* $y_i$: ground-truth value at the $i$-th query timestamp; $\hat{y}_i$: model's prediction.

## Q4: Model performance and efficiency trade-off

Our self-supervised learning (SSL) effectively offers a **favorable trade-off** with:
* **High Data Efficiency and Few-Shot Performance:** With SSL, VIMTS achieves almost **SOTA** on three IMTS datasets using only **20% of the training data**. This significantly reduces the required training data volume, enabling faster model development and deployment in data-scarce scenarios.
* **Manageable Model Complexity:** Our analysis shows **excellent performance without excessive scaling**, requiring only 2-4 GCN layers and several simple MLP predictors besides the MAE backbone. In earlier trials, scaling MAE beyond the base level did not improve results and caused memory issues, aligning with the case in VisionTS [2], which applies the visual MAE to MTS forecasting.
* **Improved Performance with Controllable Computation Cost:** On PhysioNet, our SSL delivers over 10% performance improvement for ~40% additional training time—a worthwhile trade-off for three reasons:
1. Our model relies on less training data for the same results, mitigating data constraints.
2. Each training epoch remains faster than ODE-based and other vision-based methods (**Reviewer vVGF Q3**) and is acceptable in real-world applications.
3. SSL adds no inference cost. VIMTS maintains faster inference than most competitors while staying competitive with t-PatchGNN.

Since real-world deployment scenarios often face data limitations and prioritize inference efficiency over training cost, and this cost is acceptable, this trade-off provides substantial practical benefits.

## Reference
[1] MOMENT: A family of open time-series foundation models, ICML24
[2] VisionTS: Visual masked autoencoders are free-lunch zero-shot time series forecasters, arXiv:2408.17253
[3] A Time Series is Worth 64 Words: Long-term Forecasting with Transformers, ICLR23.
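To make the time×channel patchification discussed in this thread concrete, here is a minimal numpy sketch of one plausible gridding scheme. This is our own illustration, not the authors' implementation: `patchify_imts` and all of its parameters (the number of time bins, the last-observation-wins rule) are hypothetical assumptions.

```python
import numpy as np

def patchify_imts(times, values, channels, n_channels, t_max,
                  n_time_bins=16, patch_len=4):
    """Grid an irregular series of (timestamp, value, channel) triples into
    a dense time-by-channel "image" plus an observation mask, then cut the
    time axis into fixed-length patches."""
    grid = np.zeros((n_time_bins, n_channels))
    mask = np.zeros((n_time_bins, n_channels), dtype=bool)
    bins = np.minimum((times / t_max * n_time_bins).astype(int), n_time_bins - 1)
    grid[bins, channels] = values          # last observation in a bin wins
    mask[bins, channels] = True
    patches = grid.reshape(-1, patch_len, n_channels)
    patch_mask = mask.reshape(-1, patch_len, n_channels)
    return patches, patch_mask

# toy IMTS: 3 channels observed at irregular, unaligned timestamps
times = np.array([0.1, 0.35, 0.4, 0.9])
values = np.array([1.0, -0.5, 2.0, 0.3])
channels = np.array([0, 2, 1, 0])
patches, patch_mask = patchify_imts(times, values, channels,
                                    n_channels=3, t_max=1.0)
print(patches.shape)   # (4, 4, 3): 4 patches of 4 time bins x 3 channels
```

The resulting patch grid and mask are exactly the kind of image-like input a vision MAE expects, which is the intuition the rebuttal appeals to.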
Weight matrices compression based on PDB model in deep neural networks
Accept (poster)
Summary: This paper studies the problem of DNN weight compression for the sake of better generalization. The authors propose the Population Double Bulk (PDB) model to characterize the eigenvalue behavior of $\mathbf{W}^\top\mathbf{W}$ in the bulk+spikes phase during DNN training, generalizing the existing Population Unit Bulk (PUB) model to allow for a finer characterization of the bulk eigenvalues (from one bulk in PUB to two bulks in PDB). The paper further investigates the asymptotic limits of the proposed model based on Random Matrix Theory (RMT), which allows for new algorithm design to (i) estimate the PDB model parameters based on observed empirical eigenvalues; and (ii) compress the model based on the estimated PDB model by pruning the eigenvalues inside the smaller bulk (second bulk). Empirical results demonstrate the effectiveness of the proposed method in terms of improving generalization. Claims And Evidence: Mostly yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. But the reviewer is concerned with the proof of Theorem 3.4 (Appendix G). The proofs are overly simplified and should be more concrete to make the paper self-contained and convincing. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. The proof of the theoretical result parts. Relation To Broader Scientific Literature: The paper contributes to the literature of DNN weight compression based on the model of population bulks. Specifically, the paper proposes a finer characterization of the limiting spectral distribution by including two classes of bulked eigenvalues. New algorithms to compress the weight matrices are proposed accordingly. Essential References Not Discussed: To the knowledge of the reviewer there are no lacking essential references.
Other Strengths And Weaknesses: **Strengths:** From the reviewer's perspective, the most important contribution of the paper is that the proposed new model of the weight matrix LSD (PDB) gives a better approximation of the empirical spectral distribution, given the experimental results shown in the paper. This better approximation also results in a potentially improved compression algorithm in terms of generalization performance. The model is clean and quite direct to obtain from existing models. **Weakness:** 1. The improvement of the testing accuracies w.r.t. the base model on the tasks considered in the paper is relatively limited (mostly less than 1%), which weakens the strength of the overall method. 2. The paper only discusses the PUB model and the compression methods based on PUB. Little is discussed and compared about other potential compression-based methods for improving generalization. 3. What is the computational overhead of the compression algorithm proposed in this paper compared with the algorithms mentioned in the paper that are based on PUB? The paper does not touch on this perspective. Other Comments Or Suggestions: Please see the **Weakness** part of the review. Questions For Authors: Please see the **Weakness** part of the review. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your recognition and the valuable suggestions. Please find our response below. (**T** for Theoretical Claims, **W** for Weakness)

**1. Detailed proof of Theorem 3.4 [T]**

We are sorry for the overly simplified proof of Theorem 3.4 in our paper. We will include more detailed and concrete proofs of Theorem 3.4 in the final version. Below is a brief outline.
1. For $\widehat\Theta_\text{bulk}$, its consistency relies on the convergence of the ESD $F_n^{W^TW}(x)=\frac{1}{p}\sum_{j=1}^{p} \mathbb{1}(\lambda_j \leq x)$ to the LSD (defined on page 1 in our paper). Specifically,
$$ \widehat\Theta_\text{bulk} \xLeftrightarrow{Eq. (9)(10)} ESD \xrightarrow{\text{Theorem 3.1}} LSD \xLeftrightarrow{Eq. (4)} \Theta_\text{bulk} $$
2. For $\widehat\Theta_\text{bound}$, its consistency relies on the convergence of $\widehat{\Theta}_\text{bulk}$. Specifically,
$$ \widehat\Theta_\text{bound} \xLeftrightarrow{Eq. (5)(11)(12)} \widehat\Theta_\text{bulk} \xrightarrow{\text{Theorem 3.1}} \Theta_\text{bulk} \xLeftrightarrow[Eq.(5)(6)]{\text{Theorem 3.2}} \Theta_\text{bound} $$
3. For $\widehat\Theta_\text{spike}$, its consistency relies on the convergence of $\widehat\Theta_\text{bulk}$, $\widehat\Theta_\text{bound}$ and $\lambda_\text{spike}$. Specifically,
$$ \widehat\Theta_\text{spike}\xLeftrightarrow{Eq. (6)(7)(11)}\widehat\Theta_\text{bulk}, \widehat\Theta_\text{bound},\lambda_\text{spike} \xrightarrow{\text{Theorems 3.1-3.3}}\Theta_\text{bulk},\Theta_\text{bound},g(\alpha_\text{spike})\xLeftrightarrow{Eq.(7)(11)}\Theta_\text{spike} $$

**2. About improvement of test accuracy and comparison with additional methods [W1,W2]**

In model compression, besides accuracy, another important metric is the **compression ratio** (ratio of rank of $W$ after and before compression). We aim to compress $W$ while maintaining accuracy, seeking **a balance between accuracy and compression ratio**, rather than merely pursuing high accuracy.
In this rebuttal, we include **additional experiments on BERT and T5-base** and compare with **two other compression methods**:
1. naive SVD (using 0.55 as the empirical compression ratio, Shmalo et al. (2023)),
2. sparse low-rank (SLR) (Sridhar Swaminathan et al., *Neurocomputing*, 2020).

The table below presents the accuracy and compression ratio.

| Network | Noise | Base | PDB | SLR | PUB | naive SVD |
| :--------------: | :---: | :---: | :--------------: | :----------: | :----------: | :--------: |
| FCNN:MNIST | 30% | 0.801 | 0.808(30.1%) | 0.789(27.3%) | 0.793(13.9%) | 0.803 |
| ResNet18:CIFAR10 | 30% | 0.642 | 0.658(15.8%) | 0.619(11.1%) | 0.64 (7.2%) | 0.647 |
| VGG16:CIFAR10 | 30% | 0.699 | 0.705(6.4%) | 0.701(9.1%) | 0.686 (4.3%) | 0.7 |
| BERT: RTE | 0% | 0.703 | 0.732(2.3%) | 0.717(3.3%) | 0.717 (9.6%) | 0.703 |
| BERT: SCITAIL | 0% | 0.906 | 0.916(2.1%) | 0.918(3.3%) | 0.913(12%) | 0.906 |
| T5-base: RTE | 0% | 0.717 | **0.754(1.7%)** | 0.725(16.3%) | 0.732(8.1%) | 0.717 |
| T5-base: SCITAIL | 0% | 0.921 | 0.924(2.3%) | 0.901(9.8%) | 0.917(4.7%) | 0.92 |
| Average | | 0.77 | **0.785(8.67%)** | 0.767(11.5%) | 0.771(8.5%) | 0.771(55%) |

Our method achieves the best accuracy, with a **maximum improvement of 5.2%** ($\frac{0.754}{0.717}-1$), and maintains a competitive compression rate, reaching 1.7% for T5-base: RTE. Moreover, in the model compression literature, accuracy improvements are generally not substantial, e.g., 0.7% in Hyeji Kim et al. (CVPR, 2019) and 0.36%–0.55% in Georgios Georgiadis (CVPR, 2019).

**3. Computational overhead of compression algorithm [W3]**

The table below compares the computational efficiency, with time measured in seconds for total execution and inference after data processing on NVIDIA L40 GPUs (Ubuntu 22.04).
| DNN | PDB | PUB | naive SVD | SLR |
| :-----: | :------: | :--: | :-------: | :--: |
| VGG16 | 42 | 41 | 47 | 53 |
| ResNet | 19 | 15 | 18 | 23 |
| ViT | 236 | 231 | 243 | 256 |
| T5-base | 47 | 44 | 45 | 42 |
| BERT | 20 | 15 | 18 | 29 |
| Average | **72.8** | 69.2 | 74.2 | 80.6 |

As shown, the computational overhead of our PDB model is slightly higher than PUB. However, this additional overhead is relatively modest and is outweighed by the improvements in compression performance and model accuracy. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response, and most of my concerns are addressed. I have raised my score to 3 accordingly. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your valuable comment and the time you dedicated to reviewing our work. Thank you for recognizing our revisions and kind support.
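The RMT recipe this thread revolves around — treat eigenvalues of $W^TW$ inside a noise bulk as removable, keep everything above the bulk edge — can be illustrated with a generic Marchenko-Pastur threshold. The following is a sketch of that general idea under our own toy settings, not the paper's PDBLS or Noise-Filtering algorithm; in particular, the known noise variance `sigma2` and the tolerance `tol` are assumptions.

```python
import numpy as np

def mp_upper_edge(sigma2, ratio):
    """Upper edge lambda_+ of the Marchenko-Pastur bulk for noise
    variance sigma2 and aspect ratio p/n."""
    return sigma2 * (1.0 + np.sqrt(ratio)) ** 2

def noise_filter(W, sigma2, tol=1.05):
    """Drop every singular direction whose eigenvalue of W^T W / n sits
    inside the MP noise bulk; rebuild W from the survivors.  `tol` adds a
    small margin for finite-size fluctuations of the bulk edge."""
    n, p = W.shape
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    lam = s ** 2 / n                      # eigenvalues of W^T W / n
    keep = lam > tol * mp_upper_edge(sigma2, p / n)
    return (U[:, keep] * s[keep]) @ Vt[keep], int(keep.sum())

rng = np.random.default_rng(0)
n, p = 400, 200
signal = np.outer(rng.normal(size=n), rng.normal(size=p))   # rank-1 "information"
noise = rng.normal(scale=0.1, size=(n, p))                  # sigma2 = 0.01
W = signal + noise
W_hat, rank = noise_filter(W, sigma2=0.01)
print(rank)  # typically 1: only the spike escapes the MP bulk
```

The compression ratio reported in the tables corresponds to `rank` divided by the original rank of $W$; the per-matrix cost here is a single SVD, which is why the overhead gap between the methods in the timing table is small.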
Summary: The paper introduces the population double bulk (PDB) model, an extension of the population unit bulk (PUB) model, to provide a more accurate description of the spectral properties of the weights in deep neural networks. In the PDB model, the informative components of the spectrum are captured by the spikes and the first bulk, while the second bulk corresponds to noise. Empirical results demonstrate that the PDB model outperforms the PUB model in terms of both performance and alignment. Furthermore, the PDB model is used to compress the weight matrices of convolutional neural networks (CNNs) into a low-rank structure by discarding the singular values associated with the second bulk. Networks compressed using the PDB model achieve superior accuracy compared to those compressed with the PUB model. Claims And Evidence: Tables 3-4 and Figure 3 demonstrate that the PDB model captures the empirical observations more effectively than the PUB model. The paper claims that the proposed algorithms offer the optimal compression ratio. While Figure 6 provides some support for this claim, there is a concern regarding whether the ratio is indeed the best recommendation. Specifically, the performance does not show a significant decline immediately after the values of $\beta$ or $\lambda_+$ in Figure 6. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriately chosen for the problem. Theoretical Claims: I checked the correctness of the proof of Theorem 3.1 and it seems correct. Experimental Designs Or Analyses: The experimental setup for the DNN training is consistent with established practices in the literature. However, it remains unclear whether the PDBLS and noise-filtering algorithms offer a better compression ratio than the PUB models. 
Figure 6 presents only the parameters of the PDB models, and Table 5 displays only the compression results for the PDB model, making it difficult to directly compare the compression quality between PDB and PUB models. Additionally, it would be insightful to include a comparison between the PDB model and a naïve low-rank compression method that selects the top-K singular values. Supplementary Material: I reviewed Chapter D for the proof of Theorem 3.1. Relation To Broader Scientific Literature: The algorithm to identify the signal and the noise in the spectral density would be useful in many fields of machine learning and numerical methods for scientific applications. Essential References Not Discussed: Related works are well addressed. Other Strengths And Weaknesses: **Strengths** 1. The paper is well-motivated by the fact that the PUB model is based on restrictive assumptions. The proposed spectral model effectively addresses this issue, enhancing the practicality of the population bulk model family. 2. The proposed method has significant potential for adoption across various subcategories of machine learning and deep learning. **Weaknesses** Most of my concerns relate to clarity. 1. It is unclear under what assumptions the training process produces the bulk+spike phase. 2. The experimental results could be further strengthened, as discussed in the "Experimental Designs or Analyses" section. Specifically, the proposed matrix compression scheme should be compared with both the PUB model and the naïve top-K singular value selection method. 3. Although the double bulk model resembles the empirical spectral distribution, it is unclear why the first bulk is considered the information criterion, rather than the noise criterion. Other Comments Or Suggestions: The text in the figures is too small. It is recommended to increase the font size in the figures for better readability. Questions For Authors: 1. 
How much improvement in compression quality (e.g., accuracy change vs. compression ratio) does the proposed PDBLS + noise-filtering algorithm offer compared to the PUB model or naïve low-rank compression methods? 2. What are the key motivations for choosing the PDB model over the naïve low-rank compression methods? 3. Could the authors provide insights or a theoretical discussion on why $\beta$ is chosen as the information-noise boundary? Does the double-bulk distribution emerge naturally during the training process? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your recognition and the valuable suggestions. Please find our response below. (**C** for Claims And Evidence, **E** for Experimental Designs, **W** for Weakness, **Q** for Questions)

**1. Whether the ratio is the best [C]**

Our primary goal is to identify the noise-information boundary and remove the noisy eigenvalues. Once the signal eigenvalues are removed, accuracy begins to degrade. Therefore, we aim to locate the point where **accuracy first starts to decline**, rather than the point of significant decline. In Figure 6, a **significant drop in accuracy** indicates that the model is **over-compressed**. The boundary point $\beta$ from PDB better captures the **initial decline points** compared to $\lambda_{+}$ from PUB, showing that PDB better aligns with the behavior of eigenvalues of $W$. Moreover, most $\beta$ in Figure 6 are located near the initial decline points, though a few cases are less obvious. This is because $\beta$ is an asymptotic limit based on PDB when the dimension of $W$ tends to infinity. There exists a gap between the asymptotic limit and the finite-sample empirical value. Additionally, the random error of the PDB estimate can also contribute to this effect.

**2. Additional experiments [E,W2,Q1]**

We are sorry for the lack of a detailed comparison and clear explanation in the main text. We include **additional experiments on BERT and T5-base** and compare with **two other methods**:
1. naive SVD (using 0.55 as the empirical compression ratio, Shmalo et al. (2023)),
2. sparse low-rank (SLR) (Sridhar Swaminathan et al., *Neurocomputing*, 2020).

The table below presents the accuracy and compression ratio (ratio of rank of $W$ after and before compression).
| Network | Noise | Base | PDB | SLR | PUB | naive SVD |
| :--------------: | :---: | :---: | :--------------: | :----------: | :----------: | :--------: |
| FCNN:MNIST | 30% | 0.801 | 0.808(30.1%) | 0.789(27.3%) | 0.793(13.9%) | 0.803 |
| ResNet18:CIFAR10 | 30% | 0.642 | 0.658(15.8%) | 0.619(11.1%) | 0.64 (7.2%) | 0.647 |
| VGG16:CIFAR10 | 30% | 0.699 | 0.705(6.4%) | 0.701(9.1%) | 0.686 (4.3%) | 0.7 |
| BERT: RTE | 0% | 0.703 | 0.732(2.3%) | 0.717(3.3%) | 0.717 (9.6%) | 0.703 |
| BERT: SCITAIL | 0% | 0.906 | 0.916(2.1%) | 0.918(3.3%) | 0.913(12%) | 0.906 |
| T5-base: RTE | 0% | 0.717 | **0.754(1.7%)** | 0.725(16.3%) | 0.732(8.1%) | 0.717 |
| T5-base: SCITAIL | 0% | 0.921 | 0.924(2.3%) | 0.901(9.8%) | 0.917(4.7%) | 0.92 |
| Average | | 0.77 | **0.785(8.67%)** | 0.767(11.5%) | 0.771(8.5%) | 0.771(55%) |

Our method achieves the best accuracy. Moreover, Figure 6 presents the parameters for both PDB and PUB. The vertical dashed lines represent the suggested optimal compression ratio, $\beta$ for PDB and $\lambda_+$ for PUB.

**3. Assumptions to produce the bulk+spike phase [W1]**

Martin & Mahoney (JMLR, 2021) point out that the bulk+spike phase typically emerges when $W$ exhibits **weak self-regularization** during training. To the best of our knowledge, so far there is no rigorous mathematical hypothesis specifying when the bulk+spike phase occurs. It is more of an **empirically observed phenomenon**. Both PUB and PDB provide mathematical frameworks for modeling this phenomenon.

**4. About noise-information boundary $\beta$ [W3,Q3]**

The entries of the **initial** $W_0$ are pure noise, and $EW_0^TW_0=\sigma_0^2I_p$. The eigenvalues of $W_0^TW_0$ follow a unimodal MP law which contains **no information**. As training progresses, the weight matrix is updated with new gradients, which alters the structure of $EW^TW$. Martin & Mahoney (JMLR, 2021) conducted extensive analyses on the eigenvalue distribution of $W^TW$ during training.
They claim that, as training proceeds, the bulk distribution becomes **bimodal** and **large eigenvalues emerge**, forming the **bulk+spike phase**, see Figures 11,18 in their paper. We provide a theoretical explanation for this phenomenon, which is PDB for $EW^TW$. Correspondingly, we treat the **emerging additional bulk and spikes** as **information**, while the **smaller bulk** is still **noise**. $\beta$ separates the first and second bulk, thus it's chosen as the noise-information boundary. **5. Key motivations for choosing PDB [Q2]** Both PDB and PUB are low-rank compression methods. The key challenge lies in choosing the optimal compression ratio. We first provide a mathematical framework PDB for the eigenvalues in the bulk+spike phase. Then the compression ratio is determined based on their own eigenvalue behavior. Additionally, PDB debiases large eigenvalues to enhance robustness. It can improve accuracy with noisy labels. In contrast, naive methods rely on **empirical heuristics** (same ratio for all matrices) and don't correct for signal eigenvalues, lacking theoretical support. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response and the additional experimental results. My main concern was the lack of comparison with SVD-based methods, which has been adequately addressed in the authors' response. I have raised my score to a 4, as I believe PDB makes a meaningful contribution to the problem of layer-wise adaptive low-rank compression. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your kind support and recognition of our work. We are delighted that our response and additional experiments have addressed your concerns, and we are truly grateful for your valuable feedback.
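The Bulk+Spikes picture discussed in this thread is easy to reproduce numerically: the eigenvalues of a pure-noise $W_0^TW_0/n$ fill the Marchenko-Pastur bulk $[\lambda_-,\lambda_+]$, and adding a single strong "information" direction detaches one eigenvalue as a spike. This is a generic RMT sketch under our own toy settings (dimensions, spike strength), not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 1000
sigma2, ratio = 1.0, p / n

# Pure-noise init: eigenvalues of W0^T W0 / n fill the Marchenko-Pastur bulk.
W0 = rng.normal(scale=np.sqrt(sigma2), size=(n, p))
lam0 = np.linalg.eigvalsh(W0.T @ W0 / n)
lam_minus = sigma2 * (1 - np.sqrt(ratio)) ** 2
lam_plus = sigma2 * (1 + np.sqrt(ratio)) ** 2
inside = np.mean((lam0 > lam_minus - 0.05) & (lam0 < lam_plus + 0.05))

# Add one strong "information" direction: a single eigenvalue detaches
# from the bulk as a spike, mimicking the Bulk+Spikes phase.
u = rng.normal(size=n); v = rng.normal(size=p)
spike = 3.0 * np.sqrt(n) * np.outer(u / np.linalg.norm(u), v / np.linalg.norm(v))
lam = np.linalg.eigvalsh((W0 + spike).T @ (W0 + spike) / n)
print(inside, lam.max(), lam_plus)
```

In this toy setup the noise bulk sits below $\lambda_+ \approx 2.91$ while the spike lands well above it, which is the separation that makes a noise-information boundary between the bulks meaningful in the first place.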
Summary: The paper proposed a population double bulk model for compressing weight matrices in neural networks. Compared to the previous population unit bulk model, the PDB model has more parameters and better approximates the weight matrices. Theoretical analysis (drawing tools from random matrix theory) and algorithms are proposed to estimate the parameters in the PDB model. Experiments on neural networks show an accuracy improvement using the PDB model compared to using the PUB model. Claims And Evidence: Yes. Methods And Evaluation Criteria: Makes sense to me. Theoretical Claims: Looks correct to me. Experimental Designs Or Analyses: Looks reasonable to me. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper discussed the problem of reducing overfitting through the compression of weight matrices, which seems to be of interest. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: The paper provides solid work including proposing an extended model of PUB, providing corresponding theoretical analysis and algorithms to estimate the parameters, and running experiments on neural networks that show improvement. Weaknesses: These are not necessarily weaknesses but more of questions. 1. Is it assumed that the weight matrices are i.i.d.? How accurately does the PUB/PDB model approximate the real weight matrices? Were comparisons between PUB and data-driven methods made in existing literature? Is it possible to comment on how well PUB/PDB models perform compared to data-driven methods? 2. While it is mentioned that matrix compression is set out to mitigate overfitting, it seems that no compression performs almost equally well in the experiments and better when the dataset is good enough. 3. Is the matrix compression performed for each layer or specific layers? 4. It was mentioned that in the training process the third phase, the flat-tail phase, is less common.
Is it the case that most models stop at the spike and bulk phase during the training process? Other Comments Or Suggestions: N/A Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your recognition and the valuable suggestions. Please find our response below. (**W** for Weakness)

**1. Model details [W1]**

The entries of the **initial** weight matrix $W_0$ are **i.i.d.** with mean 0 and variance $\sigma_0^2$, resulting in $\mathbb{E}W_0^TW_0=\sigma_0^2 I_p$. As **training progresses**, PDB assumes that $\mathbb{E}W^TW=\operatorname{diag}\left(\alpha_1, \ldots, \alpha_K, \sigma_1^2, \ldots, \sigma_1^2, \sigma_2^2, \ldots, \sigma_2^2\right)$, indicating that the entries are **no longer i.i.d.** due to heterogeneous variances. We use the density curve (Figure 3) and spectral moments of $W$ (Tables 3-4) to assess model approximation performance. We also compare the first three theoretical spectral moments with the empirical values on **additional neural networks**.

| | | T5-base: RTE | | | BERT: SCITAIL | |
| :-------: | :----------: | :---: | :---: | :-----------: | :---: | :---: |
| Moments | $\gamma_1$ | $\gamma_2$ | $\gamma_3$ | $\gamma_1$ | $\gamma_2$ | $\gamma_3$ |
| PUB | 0.67 | 0.9 | 1.51 | 0.59 | 0.69 | 1.02 |
| PDB | 0.77 | 1.55 | 4.17 | 0.67 | 1.18 | 2.78 |
| empirical | 0.72 | 1.83 | 5.35 | 0.71 | 1.38 | 3.95 |

Actually, the method proposed by Staats et al. (Physical Review E, 2023) is **data-driven**, and we provide a detailed comparison in Table 6 of our Appendix. We also include **additional experiments on BERT and T5-base** and compare with **two other compression methods**:
1. naive SVD (using 0.55 as the empirical compression ratio, Shmalo et al. (2023)),
2. sparse low-rank (SLR) (Sridhar Swaminathan et al., *Neurocomputing*, 2020).

The table below shows the accuracy and compression ratio (ratio of rank of $W$ after and before compression).
| Network | Noise | Base | PDB | SLR | PUB | naive SVD |
| :--------------: | :---: | :---: | :--------------: | :----------: | :----------: | :--------: |
| FCNN:MNIST | 30% | 0.801 | 0.808(30.1%) | 0.789(27.3%) | 0.793(13.9%) | 0.803 |
| ResNet18:CIFAR10 | 30% | 0.642 | 0.658(15.8%) | 0.619(11.1%) | 0.64 (7.2%) | 0.647 |
| VGG16:CIFAR10 | 30% | 0.699 | 0.705(6.4%) | 0.701(9.1%) | 0.686 (4.3%) | 0.7 |
| BERT: RTE | 0% | 0.703 | 0.732(2.3%) | 0.717(3.3%) | 0.717 (9.6%) | 0.703 |
| BERT: SCITAIL | 0% | 0.906 | 0.916(2.1%) | 0.918(3.3%) | 0.913(12%) | 0.906 |
| T5-base: RTE | 0% | 0.717 | **0.754(1.7%)** | 0.725(16.3%) | 0.732(8.1%) | 0.717 |
| T5-base: SCITAIL | 0% | 0.921 | 0.924(2.3%) | 0.901(9.8%) | 0.917(4.7%) | 0.92 |
| Average | | 0.77 | **0.785(8.67%)** | 0.767(11.5%) | 0.771(8.5%) | 0.771(55%) |

It can be seen that while the compression ratio of PDB lies between PUB and SLR, it achieves the best accuracy, with a **maximum improvement of 5.2%** ($\frac{0.754}{0.717}-1$).

**2. About overfitting [W2]**

Our primary goal is to **reduce model complexity** while **preserving generalization ability**. During the matrix compression process, some model information is inevitably lost. Our approach aims to **identify the optimal boundary between noise and information**, effectively removing **noisy eigenvalues** to maintain the model's generalization without excessive information loss. When the data is clean, the impact of noisy eigenvalues is minimal, making the difference between compressed and uncompressed matrices negligible. However, when the data quality is poor, the noise embedded in $W$ can negatively affect test accuracy. Since our compression method essentially **filters out noise**, it **enhances generalization** and improves the model's robustness. Therefore, our method shows a more significant advantage in the presence of noise, whereas when the data quality is good, the difference between compressed and uncompressed models is minimal. **3.
Is the matrix compression performed for each layer or specific layers? [W3]**

We **selectively compress specific layer matrices**, particularly those where the size of $W$ is sufficiently large. For smaller matrices, the **limited number of eigenvalues** results in **less accurate model estimation**, making compression ineffective.

**4. About the spike and bulk phase [W4]**

Martin & Mahoney (JMLR, 2021) conducted a **comprehensive and in-depth analysis** of the eigenvalue distribution of $W^TW$ throughout the training process. They found that when the DNN weight matrices exhibit **weak self-regularization**, **eigenvalue mass shifts to larger values**, forming the Bulk+Spikes phase during the first few epochs. Once the spikes appear, **substantial changes** in the distribution become hard to see.
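The spectral-moment comparison used in point 1 of this rebuttal is straightforward to reproduce with a few lines of numpy. The sketch below is generic (not the paper's code): for an i.i.d. initialization the empirical moments approach the known Marchenko-Pastur limits, which gives a handy sanity check before comparing fitted PUB/PDB moments against the ESD.

```python
import numpy as np

def spectral_moments(W, ks=(1, 2, 3)):
    """Empirical spectral moments gamma_k = (1/p) sum_j lambda_j^k of
    W^T W / n, for comparing a fitted LSD against the observed ESD."""
    n, p = W.shape
    lam = np.linalg.eigvalsh(W.T @ W / n)
    return [float(np.mean(lam ** k)) for k in ks]

rng = np.random.default_rng(2)
W = rng.normal(size=(300, 150))              # i.i.d. N(0, 1), ratio y = 0.5
g1, g2, g3 = spectral_moments(W)
# Marchenko-Pastur limits for sigma^2 = 1:
#   gamma_1 -> 1,  gamma_2 -> 1 + y,  gamma_3 -> 1 + 3y + y^2
print(g1, g2, g3)
```

After training, the gap between these empirical moments and the single-bulk MP prediction is exactly the mismatch the PDB model's second bulk is meant to absorb.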
Summary: This paper presents the Population Double Bulk (PDB) model, an extension of the Population Unit Bulk (PUB) model, for the effective compression of deep neural network weight matrices. The authors leverage a dual-cluster structure to more accurately analyze the eigenvalue distribution and employ the PDBLS algorithm to efficiently estimate the boundary between informative signals and noise. Additionally, they introduce the PDB Noise-Filtering algorithm, which selectively removes unnecessary eigenvalues while preserving crucial information, thereby improving compression performance. The experimental results demonstrate that the proposed PDB-based compression technique can maintain the same test accuracy as existing methods at lower ranks or even enhance generalization performance. Furthermore, the approach exhibits robustness in noisy data scenarios, suggesting its practical applicability in real-world deep learning tasks. ## update after rebuttal My main concern was the lack of evaluation on other models such as LLMs and the comparison with SVD-based methods, which was well addressed in the authors’ reply. For this reason, I have raised my score from 2 to 3. Claims And Evidence: - The paper conducts LSD (eigenvalue distribution) comparison experiments, asserting that the PDB model provides a more accurate representation of the weight matrices in real neural networks than the PUB model. - The study suggests that the PDB model can enhance generalization performance by regulating the Lipschitz constant. - While the paper claims that PDB-based compression outperforms the PUB-based approach, a more comprehensive comparative analysis with competing methods is necessary to elucidate the reasons behind its superior performance. Methods And Evaluation Criteria: - To address the limitations of the existing PUB model, the authors introduce the Population Double Bulk (PDB) model and develop the PDBLS algorithm and PDB Noise-Filtering algorithm based on it. 
- Notably, the proposed approach effectively analyzes the eigenvalue distribution of neural networks and applies spectral analysis using Random Matrix Theory (RMT) to provide theoretical justification for the compression method. - However, since the evaluation is limited to a comparison with the PUB model, the study lacks a broader assessment against state-of-the-art compression techniques. - Additionally, it remains unclear whether the proposed method is optimized specifically for image data or if it can be generalized to other modalities, such as speech or text data. Further experiments on diverse datasets would strengthen the claims regarding the method’s applicability. Theoretical Claims: - While the PUB model assumes a single variance (σ²), the PDB model extends this assumption by incorporating two bulk distributions, claiming to better model general cases. However, additional experimental and theoretical validation is required to determine whether the number of bulks is inherently limited to two in real-world data. - The study leverages Random Matrix Theory (RMT) to demonstrate that the eigenvalue distribution in the PDB model follows a specific equation, ensuring theoretical consistency. However, further empirical validation on diverse neural network architectures is necessary to confirm the practical applicability of the proposed approach. Experimental Designs Or Analyses: - The paper evaluates the PDB model against the PUB model using a range of neural networks (FNN, ResNet18, VGG16) and benchmark datasets (MNIST, CIFAR-10, ImageNet). - To assess generalization, experiments were also conducted on noisy data. - However, the study focuses solely on CNN-based architectures, leaving open the question of whether the PDB model would perform similarly well in Transformer-based models like BERT or ViT. 
- Additionally, the method has only been tested on computer vision tasks, and its effectiveness in other domains, such as NLP or speech recognition, remains unexplored. Supplementary Material: - The paper provides additional analysis of the experimental results, offering a more detailed comparison between the PDB model and the PUB model. - Table 6 quantitatively demonstrates that the PDB compression method preserves test accuracy better than PUB across various neural network architectures and datasets. - Appendix B supports this claim theoretically, using spectral moment analysis to show that the PDB model approximates the eigenvalue distribution of real weight matrices more accurately than the PUB model. Relation To Broader Scientific Literature: - Previous research has used the Population Unit Bulk (PUB) model to analyze the eigenvalue distribution of weight matrices, but its single-variance assumption fails to fully capture the complexity of real neural networks. - To address this, the paper introduces the Population Double Bulk (PDB) model, which considers two bulk distributions, allowing for a more realistic representation of eigenvalue behavior. - However, further comparative experiments with existing compression methods are needed to better assess the practical performance of the PDB model. Essential References Not Discussed: - None Other Strengths And Weaknesses: Strengths - This paper expands the spectral analysis of neural network weight matrices by introducing the PDB model, which provides a more accurate explanation of eigenvalue distributions compared to the existing PUB model. - By applying Random Matrix Theory (RMT), the study establishes a theoretical foundation for understanding eigenvalue behavior and introduces the PDBLS algorithm, which enables a more precise separation between informative signals and noise. 
- The effectiveness of the proposed approach is demonstrated through experiments on a variety of neural network architectures (FNN, ResNet18, VGG16) and benchmark datasets (MNIST, CIFAR-10, ImageNet), highlighting its practical applicability. - The PDB model shows strong generalization performance, even in the presence of noisy data, and in some cases, it improves over existing methods, suggesting its potential to help mitigate overfitting. Weaknesses - The study does not include a direct comparison with other well-established neural network compression techniques (e.g., SVD), making it difficult to determine whether the PDB-based approach offers a clear advantage. - The experiments are limited to CNN-based architectures, leaving it unclear how well the PDB model performs on other structures, such as Transformers (BERT, ViT). - The method has only been tested on computer vision tasks, so its applicability to other domains, such as natural language processing (NLP) and speech recognition, remains uncertain. - (Optional) The paper does not discuss the computational efficiency of the compressed models (FLOPs reduction, inference speed, memory savings), making it hard to assess their practical benefits. Other Comments Or Suggestions: - None Questions For Authors: - Can the authors provide experiments on how well the PDB model performs on other architectures such as Transformers (BERT, ViT)? - Could the authors provide experiments on application to other domains such as Natural Language Processing (NLP)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your recognition and the valuable suggestions. Please find our response below. (**T** for Theoretical Claims, **W** for weakness, **Q** for Questions) **1. Comparison with additional methods [W1]** We include additional experiments on two other methods: 1. naive SVD (using 0.55 as the empirical compression ratio, Shmalo et al. (2023)), 2. sparse low-rank (SLR) (Sridhar Swaminathan et al., *Neurocomputing*, 2020). The table below shows accuracy and compression ratio (ratio of rank of $W$ after and before compression). | Network | Noise | Base | PDB | PUB | Naive SVD | SLR | | ---------------- | :---: | :---: | :--------------: | :-----------: | :---------: | :-----------: | | FCNN:MNIST | 0% | 0.98 | 0.98 (15.7%) | 0.979 (8.6%) | 0.98 | 0.98 (15.6%) | | | 30% | 0.801 | 0.808 (30.1%) | 0.793 (13.9%) | 0.803 | 0.789 (27.3%) | | ResNet18:CIFAR10 | 0% | 0.835 | 0.838 (21.5%) | 0.834 (10.4%) | 0.835 | 0.836 (26%) | | | 30% | 0.642 | 0.658 (15.8%) | 0.64 (7.2%) | 0.647 | 0.619 (11.1%) | | VGG16:CIFAR10 | 0% | 0.842 | 0.842 (13.5%) | 0.841 (5.5%) | 0.842 | 0.842 (13%) | | | 30% | 0.699 | 0.705 (6.4%) | 0.686 (4.3%) | 0.7 | 0.701 (9.1%) | | **Average** | | 0.800 | **0.805 (17.2%)** | 0.795 (8.3%) | 0.801 (55%) | 0.794 (17.2%) | **2. Additional experiments on LLM [W2,W3,Q1,Q2]** We include additional experiments on BERT, ViT, and T5-base. The table below presents accuracy and compression ratio. SLR performs poorly on ViT (accuracy 0.1), and is thus omitted. 
| Network | Base | PDB | SLR | PUB | naive SVD | | :--------------: | ----- | :--------------: | ------------- | ------------ | ----------- | | BERT: RTE | 0.703 | 0.732 (2.3%) | 0.717 (3.3%) | 0.717 (9.6%) | 0.703 | | BERT: SCITAIL | 0.906 | 0.916 (2.1%) | 0.918 (3.3%) | 0.913 (12%) | 0.906 | | T5-base: RTE | 0.717 | 0.754 (1.7%) | 0.725 (16.3%) | 0.732 (8.1%) | 0.717 | | T5-base: SCITAIL | 0.921 | 0.924 (2.3%) | 0.901 (9.8%) | 0.917 (4.7%) | 0.92 | | ViT-L: DTD | 0.745 | 0.753 (11%) | - | 0.748 (8.2%) | 0.741 | | ViT-L: SUN397 | 0.768 | 0.777 (13.4%) | - | 0.772 (11%) | 0.772 | | **Average** | 0.793 | **0.809 (5.5%)** | - | 0.800 (8.9%) | 0.793 (55%) | **3. Computational efficiency [W4]** The table below compares the computational efficiency, with time measured in seconds for total execution and inference after data processing on NVIDIA L40 GPUs (Ubuntu 22.04). | DNN | PDB | PUB | naive SVD | SLR | | :-----: | :------: | :--: | :-------: | :--: | | VGG16 | 42 | 41 | 47 | 53 | | ResNet | 19 | 15 | 18 | 23 | | ViT | 236 | 231 | 243 | 256 | | T5-base | 47 | 44 | 45 | 42 | | BERT | 20 | 15 | 18 | 29 | | Average | **72.8** | 69.2 | 74.2 | 80.6 | **4. Validity of PDB [T1]** **On the Empirical Side**: Our motivation is that the spectral distribution of $W^TW$ does not perfectly align with PUB. This led us to consider more general models. Since the size of $W$ is typically too small to guarantee an accurate estimation of a continuous population bulk, we opted for a discrete surrogate of an M-bulk model ($M\geq2$). However, through extensive experiments (Section 5 included), we found that the proportions of **bulks beyond M=2** are nearly negligible (Table 1). Therefore, we adopt PDB as a practical and effective representation. **On the Theoretical Side**: The entries of the initial $W_0$ are pure noise, and $EW_0^TW_0=\sigma_0^2I_p$. The eigenvalues of $W_0^TW_0$ follow a unimodal MP law. 
As training progresses, $W$ is continuously updated with new gradients, altering the structure of $EW^TW$. Martin \& Mahoney (JMLR, 2021) conducted extensive analyses on the eigenvalue distribution of $W^TW$ during training. They claim that, as training proceeds, the eigenvalue distribution becomes bimodal (Figures 11 and 18 in their paper). We provide a theoretical explanation (PDB for $EW^TW$) for this phenomenon and establish a systematic compression framework. **5. Additional moments comparison [T2]** We compare the first 3 theoretical spectral moments of $W$ with the empirical values on additional neural networks. |  | T5-base: RTE |  |  | BERT: SCITAIL |  |  | | :-------: | :----------: | :---: | :---: | :-----------: | :---: | :---: | | Moments | $\gamma_1$ | $\gamma_2$ | $\gamma_3$ | $\gamma_1$ | $\gamma_2$ | $\gamma_3$ | | PUB | 0.67 | 0.9 | 1.51 | 0.59 | 0.69 | 1.02 | | PDB | 0.77 | 1.55 | 4.17 | 0.67 | 1.18 | 2.78 | | empirical | 0.72 | 1.83 | 5.35 | 0.71 | 1.38 | 3.95 | --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response and the additional experimental results. My main concern was the lack of evaluation on other models such as LLMs and the comparison with SVD-based methods, which was well addressed in the authors’ reply. For this reason, I have raised my score from 2 to 3. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your valuable comments and the time you dedicated to reviewing our work. We are delighted that our response and additional experiments have addressed your concerns. Thank you for recognizing our revisions and for your kind support.
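For context, spectral moments of the kind compared in this thread can be computed empirically as $\gamma_k = \frac{1}{p}\sum_i \lambda_i^k$ over the eigenvalues of $W^TW/n$; the exact normalization used in the paper may differ, so this numpy sketch is illustrative only.

```python
import numpy as np

def spectral_moments(W, k_max=3):
    # Empirical spectral moments gamma_k = (1/p) * sum_i lambda_i^k of W^T W / n
    # (this normalization convention is an assumption, for illustration).
    n, _ = W.shape
    vals = np.linalg.eigvalsh(W.T @ W / n)
    return [float(np.mean(vals ** k)) for k in range(1, k_max + 1)]

rng = np.random.default_rng(0)
W = rng.normal(size=(1000, 100))   # pure-noise weights, sigma^2 = 1
g1, g2, g3 = spectral_moments(W)
# For MP with aspect ratio gamma = p/n: gamma_1 = sigma^2, gamma_2 = sigma^4 (1 + gamma).
```

For a pure-noise matrix the moments match the Marchenko-Pastur predictions, so deviations (as in the table above) indicate structure beyond a single bulk.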
Improving the Diffusability of Autoencoders
Accept (poster)
Summary: This paper finds that pre-trained VAEs for visual generation exhibit larger high-frequency components than the original RGB images, through the lens of spectral analysis using the 2D DCT. To improve latent diffusion generative modeling, the authors propose to align the spectral properties of image latents with those of RGB images by enhancing the VAE's reconstruction of the low-frequency signals. Results on image generation and video generation show that the proposed Scale Equivariance regularization improves generation quality. Claims And Evidence: The claim below lacks verification or literature support: We also hypothesize that higher frequencies components are harder to model than lower frequency components for the following reasons and thus should be avoided: (i) they have higher dimensionality; (iii) they are more susceptible to error accumulation over time Reasons: * High-frequency components of images usually have a lower entropy than the low-frequency components, thus image high-freq signals should be easier to model since they are intuitively close to 0 (this is also why JPEG removes high-freq signals for compression). * It is not clear to me why high-freq signals have higher dimensionality * It is not clear why high-freq signals are more susceptible to error accumulation Methods And Evaluation Criteria: The authors proposed a regularization term (Scale Equivariance) on the vanilla VAE loss, which is easy to understand. But, different from the explanation in the paper, I think the principle of SE is that it explicitly enhances the reconstruction of the image's low-frequency signals. The evaluation criteria FID, FVD, PSNR, SSIM, and LPIPS are commonly used in the community. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: The experimental designs and analyses are overall sound and convincing. The authors verify their methods on image generation, video generation, and autoencoder reconstruction. 
One suggestion: it is better for the authors to add an experiment to show the effect of Scale Equivariance by training from scratch. (The authors only verified SE by fine-tuning the pre-trained VAE.) Supplementary Material: I have reviewed B. Additional exploration, where the authors explored a fine-grained version of Scale Equivariance. Relation To Broader Scientific Literature: The key contribution of this paper is improving the VAE used in image generation and video generation. Specifically, SD-VAE, FluxAE, and CogVideoX-AE are basic components for high-resolution image generation and video generation. Essential References Not Discussed: The references are well discussed. Other Strengths And Weaknesses: `Strengths`: * The paper is easy to follow. The analysis, motivation and method are overall sound and well-organized. * Experiments are well-designed, and the results are promising. * The proposed regularization method is potentially useful for the community. `Weaknesses` * It is not clear why reducing the high-frequency content of the latent space can improve generative modeling. The authors provide the hypothesis but lack sufficient verification. I encourage the authors to explore it from the entropy of the high-freq coefficients in both RGB space and DCT space, where the distribution of high-freq components in the RGB space should have low entropy. * It is better to show the effect of Scale Equivariance when training the VAE from scratch, as a supplement to fine-tuning. * It is not obvious from Figure 5 that Scale Equivariance preserves more content compared to the baseline. * Presentation-wise: (1) It is better to explain in detail why stronger KL regularization leads to a larger high-freq latent expression (random noise of the latent codes). (2) CosmosTokenizer employs wavelet transform mainly for compression according to their paper; (3) Equation 2 should have a scalar for the Scale Equivariance term. 
Other Comments Or Suggestions: Already stated above. Questions For Authors: Q1: In equation 1, $A_{u, v}$ should be <=1 after normalizing by $D_{0, 0}$, but why are there some components still > 1 in Figure 2 and Figure 3? Q2: Why 'As the number of channels in the autoencoder’s bottleneck increases, high-frequency components become more pronounced'? Q3: What is the number of sampling steps used in Table 1? What are the results when using different sampling steps? Code Of Conduct: Affirmed. Overall Recommendation: 3
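For context on the Q1 discussion above, the normalized amplitude map $A_{u,v} = |D_{u,v}|/|D_{0,0}|$ can be sketched directly in numpy with an orthonormal DCT-II basis (our reading of Equation 1; the paper's exact averaging may differ). Components exceed 1 only when $D_{0,0}$ is not the largest coefficient; for a positive-mean image like the one below, everything stays at or below 1.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows indexed by frequency k).
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * x + 1) / (2 * n))
    M[0] /= np.sqrt(2.0)
    return M

def normalized_amplitude(img):
    # A_{u,v} = |D_{u,v}| / |D_{0,0}|, with D the 2D DCT of the image.
    D = dct_matrix(img.shape[0]) @ img @ dct_matrix(img.shape[1]).T
    return np.abs(D) / np.abs(D[0, 0])

rng = np.random.default_rng(0)
img = 2.0 + rng.random((16, 16))   # positive mean, so D_{0,0} dominates
A = normalized_amplitude(img)
```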
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer’s valuable feedback and constructive recommendations. Below, we systematically respond to each issue highlighted. We will ensure comprehensive incorporation of all suggestions into our manuscript. > It is not clear why high-frequency components have higher dimensionality. Reviewer JqrQ has raised the same concern, and due to the limited space, we politely refer to our argumentation in that other response. > It is not clear why high-frequency components are more susceptible to error accumulation. Similarly to the previous question, we kindly refer the reviewer to our response to Reviewer JqrQ on the same matter. > Can SE be helpful just because it improves the reconstruction of low-frequency components? Not quite. PSNR/SSIM scores are extremely sensitive to low-frequency reconstructions, and all the autoencoders (vanilla, fine-tuned and fine-tuned+SE) perform on par in terms of these metrics. > From-scratch training. Due to the space limit, we are again forced to refer the reviewer to our response on the same question to Reviewer MRKn. We apologize for this inconvenience. > The improved performance can be due to a different reason: high-frequency components of images have a lower entropy and should be easier to model. In this [[plot]](https://anonymous.4open.science/r/ae-diff-icml-rbtl-815F/figures/entropy.png), we visualize the entropy of the latents for regularized/non-regularized FluxAE at various frequencies, and also RGB. Entropy is computed by obtaining a histogram of 100 bins for each frequency and its corresponding density. While in image space, high frequencies indeed have lower entropy, for the latents, HF entropy exhibits a much flatter or an even slightly increasing profile, indicating that they might be harder to model. 
We note that while higher entropy, intuitively, should lead to harder modeling of a distribution, it does not necessarily affect the quality of the final samples obtained from passing the latents through the decoder. Higher frequencies with smaller scale can still exhibit high entropy while our SE reduces the dependence on these frequencies. > Figure 5 does not show that SE preserves more content compared to the baseline. We try being gentle about enforcing SE regularization not to affect the reconstruction quality. Figure 5 shows that our regularized AE does not introduce spurious high frequencies when they are chopped off in the latents. This also results in meaningful improvements in the corresponding reconstruction metrics as confirmed in Figure 8. Other examples (e.g., [[this one]](https://anonymous.4open.science/r/ae-diff-icml-rbtl-815F/figures/firetruck-freq.png)) can show this effect more noticeably. > [3 writing issues] We fully agree with the remarks and will incorporate them in the manuscript. > Why are some amplitudes greater than 1 after normalization by D_{0,0} in Figures 2 and 3? D_{0,0} corresponds to the lowest frequency, which is basically the mean of all the values. It might not necessarily have the largest amplitude compared to other frequencies (though this rarely happens in natural signals). For example, [[this figure]](https://anonymous.4open.science/r/ae-diff-icml-rbtl-815F/figures/freqs01-example.png) shows an example of an image which has 0 amplitudes everywhere, except for the “[[0,1]]” spatial frequency. > Why do high-frequency components become more pronounced as bottleneck channels increase? This is an interesting question. We hypothesize that increasing the autoencoder's bottleneck channels enables the model to better capture finer, high-frequency details. Initially, with limited capacity, the encoder prioritizes smoother, low-frequency information. 
As capacity grows, it encodes additional high-frequency details, which are unique and informative. However, without explicit regularization promoting frequency-based disentanglement, these high-frequency components distribute across channels in an unstructured manner. Thus, higher-dimensional bottlenecks enhance high-frequency representations but do not yield systematic frequency-specific disentanglement per channel. We will clarify this point in the revised manuscript. > Results for various numbers of steps. We generated the Table 1 results using 256 steps (L#318). Additional plots for FluxAE/CogVideoX-AE/LTX-AE across step counts (16,32,64,128,256) for FID/DinoFID and FVD (images: 50K samples, videos: 10K samples) are provided [[here (see the neighboring folders as well)]](https://anonymous.4open.science/r/ae-diff-icml-rbtl-815F/num-steps-plots/image_flux_dit-xl/2_fid-dinov2.png). Regularized autoencoders consistently improve diffusability. We will include these results (+ DiT-XL/2 for CogVideoX-AE) in the final paper. > Extended KL influence discussion. We provided an expanded KL discussion (noise injection, relevant literature, RGB+noise spectrum analysis) in response to Reviewer tijt. We welcome any additional suggestions from the reviewer. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. Some of my doubts and concerns have been solved. Regarding the claims in the paper, I strongly suggest the authors to improve them by adding rigorous verification/rephrasing with proper citations, because these can be potentially important and interesting insights. Considering the contribution of this paper to the community, I maintain my rating and lean toward accepting the paper.
Summary: The paper observes higher-frequency components in the VAE's latent space than in normal RGB images, and these high-frequency components have greater magnitude with larger channel counts and stronger KL regularization. Therefore, it proposes a novel regularization technique -- scale equivariance (SE) -- to improve the diffusability of the VAE. Specifically, SE suppresses the high-frequency components in latent space by introducing an additional loss between the ground-truth downsampled image $\tilde{x}$ and the reconstruction $\text{Dec}(\tilde{z})$ of the correspondingly downsampled latent vector. Extensive experiments have been conducted on different VAEs to demonstrate the effectiveness of the proposed method. Claims And Evidence: The effectiveness of the proposed method is well supported in the experiment section. Nevertheless, I have the following questions: 1. For hypothesis (i) in line 210 (right column), how do the results in Figure 4 imply that high-frequency components are higher dimensional? Adding a new loss does not necessarily increase the maximum dimensionality the VAE can model. 2. For hypothesis (ii), how does Figure 5 show that higher frequencies are generated only in the final steps of sampling? I thought Figure 5 only considers images reconstructed with the VAE, with no diffusion sampling involved. 3. For hypothesis (iii), how does Figure 6 demonstrate that higher frequencies are more susceptible to error accumulation over time? Is it possible to provide more quantitative evidence for this claim (rather than sampled noisy images during the diffusion denoising process)? 4. Empirically, it is observed that using scale equivariance can improve the performance of the diffusion model on different metrics. But I do not find any quantitative results illustrating that generation performance or the diffusability of the VAE is negatively correlated with the presence of higher-frequency components. 
It is possible that scale equivariance might interact with other factors in the latent space of the VAE and inadvertently improve the generation performance of the ultimate generative model. Methods And Evaluation Criteria: The paper adopts standard metrics for VAEs and diffusion models. But I have some important concerns regarding the computation budget of the baseline and of scale equivariance. 1. Regarding equation (2), if I understand correctly, the additional loss term regarding the downsampling introduces additional GPU memory cost during the forward and backward passes. Can you provide the FLOPs of each VAE training/fine-tuning iteration? I am wondering whether it is fair to compare with the baseline without scale equivariance, given they might use different compute budgets for the same number of iterations. 2. Also, is it possible to adopt dynamically downsampled latent vectors and images during training rather than introducing a new loss term for it? In other words, we do not use the first term in Equation 2, and vary the downsampling ratio for each batch of images and vectors (of course, we still need one forward pass to get the latent z from the original x). Can we still achieve better performance than the original baseline? 3. Can we introduce a coefficient to balance the weight between the original reconstruction objective and the downsampled one? What is the optimal coefficient (empirically)? Theoretical Claims: This paper does not introduce any new theory or proof. Experimental Designs Or Analyses: The paper has conducted extensive evaluations using various VAE models, including FluxAE, CogVideoX-AE, and LTX-AE, and across different datasets, including ImageNet-1K and Kinetics-700. In all cases, the dataset (in-the-wild data) used to train the VAE is different from the one used to train the diffusion model. What if they are the same? Does scale equivariance still help filter out higher frequencies and improve generation performance? 
Supplementary Material: The authors do not provide any supplementary material, which is not an issue for me. Relation To Broader Scientific Literature: The proposed method is both simple and novel. I am not aware of any prior work using downsampled images to reduce high-frequency components in VAEs. Essential References Not Discussed: I do not find any important but missing references. Other Strengths And Weaknesses: See above for each section. Other Comments Or Suggestions: I do not have major comments and suggestions other than the questions listed above. Questions For Authors: I have listed all my questions in each part. While the proposed method does improve the performance, my major concern is (1) whether the motivation of the method (suppressing high-frequency components) leads to the improved performance, (2) whether it is fair to compare with the original baseline given the same number of iterations but a possibly larger computation budget from the new loss term, (3) whether it is possible to simplify the method using a reconstruction loss with a dynamic downsampling ratio, or to further improve generation performance with weights balancing the two reconstruction losses. If the authors can resolve some of my concerns (mostly 1 and 2), I am happy to increase my score. Code Of Conduct: Affirmed. Overall Recommendation: 3
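For illustration, the scale-equivariance term discussed in this review can be sketched as follows: decode a spatially downsampled latent and match it against the downsampled input. The average-pooling downsampler, the toy identity autoencoder, and the 0.25 weight (the strength mentioned in the review) are all stand-ins for this sketch of our reading of Equation 2, not the paper's implementation.

```python
import numpy as np

def avg_pool2d(x, k=2):
    # Downsample a 2D map by factor k via average pooling (a hypothetical
    # stand-in for the paper's downsampling operator).
    h, w = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def se_loss(x, encode, decode, k=2, weight=0.25):
    # Reconstruction term + scale-equivariance term: decoding a downsampled
    # latent should match the downsampled input.
    z = encode(x)
    recon = np.mean((x - decode(z)) ** 2)
    se = np.mean((avg_pool2d(x, k) - decode(avg_pool2d(z, k))) ** 2)
    return recon + weight * se

encode = decode = lambda t: t        # toy identity autoencoder for the demo
rng = np.random.default_rng(0)
x = rng.random((8, 8))
loss = se_loss(x, encode, decode)    # exactly 0 for an identity autoencoder
```

A real autoencoder would make both terms nonzero; reviewer question 2 above essentially asks whether the `weight`-scaled second term can replace the first one under a randomized downsampling ratio.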
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable remarks. Below, we address each raised concern. > Why high-frequency components have higher dimensionality? We agree this could be clearer. By "low-frequency components," we mean DCT coefficients required to reconstruct a feature map downsampled by factor k per spatial dimension. High-frequency components are remaining coefficients needed to fully reconstruct the original map. Since DCT coefficients scale quadratically with k, we have: Number of high-frequency components = (k² − 1) × Number of low-frequency components For k=2 (our experiments), high-frequency components thus have three times higher dimensionality. We'll clearly state this in the final version. > It is unclear why higher frequencies are only generated in the final steps. This claim originates from prior literature (Rissanen et al., "Generative Modelling With Inverse Heat Dissipation"; also "Diffusion is spectral autoregression" and DCTDiff). Intuitively, high-frequency components are too easily erased by noise in the early denoising steps, which is why the model can only pick them up for smaller noise levels, which relate to the final denoising steps. We will extend the discussion of this topic in the earliest revision. > It is unclear why higher frequencies are more prone to error accumulation. This observation is also an already explored phenomenon in the broader diffusion literature. A solid reference we can point to is Li et al. “On Error Propagation of Diffusion Models” (ICLR 2024) with Figure 2 of their paper serving as a clear illustration of such behaviour. Intuitively, the denoising process of a diffusion model can be represented as an ODE trajectory. And ODEs are known to accumulate errors very quickly even for simple processes (e.g., even for the simplest y’ = y equation, forward Euler solver would accumulate the error proportional to h² with each step of size h, and this error compounds exponentially as steps progress). 
Then, since diffusion models generate high frequencies later in the trajectory (per our previous response), the error accumulation is amplified specifically for them. > Could SE interact with other AE properties, rather than spectral properties alone? In Table 6 of the appendix, we provide the ablation for direct *high frequency chop-off*: it only has influence on spectral properties (and this influence is the same as for SE), eliminating the interaction of other factors. It noticeably improves the diffusability, but we opt for the equivalent SE regularization since it’s much simpler and less error-prone to implement and should be easier to adopt by the community. > Computational cost and potentially unfair AE training budget comparison. We measured the FLOPs of FluxAE (for the batch size of 1 and resolution of 256²) using [[fvcore]](https://github.com/facebookresearch/fvcore). The entire encoder-decoder pass has 447 GFLOPs, split between the encoder/decoder as 136 vs 311 GFLOPs. Our regularization reuses the encoder pass and only runs the decoder with x2 or x4 reduced resolution (the scale sampled randomly during training). This results in 77.6 or 19.4 extra GFLOPs of the decoder, which is almost exactly 1/4 or 1/16 of the decoder compute, or +17% or 4.5% of the total forward pass. Since we sample the x2 or x4 downsampling factor equally randomly, this results in ~10.75% of total FLOPs overhead for our regularization. To strengthen our point even further, we ran an experiment where the baseline FluxAE was fine-tuned strictly for 2 times longer (for 20K instead of 10K iterations). The resulting DiT-B/2 model achieved FID@5k and DinoFID@5k of only 33.99 and 642.7 vs the corresponding metrics of 25.87 and 551.27 of our FluxAE+FT-SE, fine-tuned for only 10K steps. > Balancing reconstruction and SE regularization. 
We appreciate the raised concern and we provide the ablation over the SE strength in the table [[here]](https://anonymous.4open.science/r/ae-diff-icml-rbtl-815F/figures/se-reg-ablation.png). We remain cautious about reducing the influence of the main reconstruction term so as not to lose reconstruction quality. > Training AE and LDM on the same dataset. We conducted from-scratch FluxAE and CogVideoAE training on ImageNet/Kinetics datasets, followed by DiT-B/2 training, as detailed in our response to Reviewer MRKn. In both cases, SE regularization consistently improved results. > Unclear if suppressing high frequencies motivates improved performance. We emphasize that merely suppressing high frequencies is insufficient; it is also crucial to prevent the decoder from arbitrarily amplifying them (we provide a detailed discussion on this in Sec. 3.3). This ensures alignment between the diffusion process's strength (its ability to generate low-frequency components) and human perception, which is more tolerant to errors in low frequencies. Our experiments with direct frequency chop-off (described above) articulate this more explicitly. --- Rebuttal Comment 1.1: Comment: I appreciate the rebuttal from the authors. My main concerns about the additional computation budget introduced by SE and other questions regarding the validity of claims in the paper have been addressed, so I will update my score to 3. I encourage the authors to move the results of Table 6 in the Appendix (or part of them) to the main text. The results motivate the paper more intuitively, as it only removes high frequencies without introducing any additional confounding factors (that can inadvertently improve the VAE's performance). I also think it might be worth trying to randomly downsample the latent vector z during training instead of introducing an additional loss in SE.
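The error-accumulation intuition invoked in the rebuttal above (forward Euler on y' = y) is easy to check numerically: the per-step error is O(h²), but it compounds into an O(h) global error, so halving the step size roughly halves the final error. A minimal stdlib sketch:

```python
import math

def euler_global_error(h, T=1.0):
    # Forward Euler on y' = y, y(0) = 1; return |y_N - exp(T)|.
    n = round(T / h)
    y = 1.0
    for _ in range(n):
        y += h * y
    return abs(y - math.exp(T))

e_coarse = euler_global_error(0.1)    # 10 steps
e_fine = euler_global_error(0.05)     # 20 steps
# Global error of forward Euler is O(h): e_coarse / e_fine is close to 2.
```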
Summary: This paper explores the latent spaces of autoencoders within latent diffusion models (LDMs), specifically examining spectral discrepancies between latent and RGB spaces. The authors introduce the concept of diffusability, which quantifies how effectively a distribution can be modeled by a diffusion process. They hypothesize that high-frequency components in the latent space degrade diffusability, thereby reducing both the efficiency and generation quality of LDMs. To mitigate this issue, they propose a scale equivariance regularization strategy, which enforces spectral alignment between the latent and RGB spaces by removing high-frequency components. Empirical evaluations demonstrate that this approach improves image generation performance by 19% and video generation by 44% compared to existing LDMs. Claims And Evidence: The authors’ hypothesis is grounded in the spectral analysis of latent and RGB spaces, as presented in Figures 2 and 3. However, certain aspects of their analysis remain unclear. Figure 2: The caption lacks clarity regarding whether the spectra of various channels in FluxAE correspond to reconstructions or latent representations. The distinction between “comparison between the reconstructions” and “the latent space of an autoencoder” is not explicitly addressed, leaving ambiguity in the interpretation of the results. Additionally, given the limited scope of the analysis (only applied on FluxAE), this claim appears insufficiently substantiated. The absence of empirical verification across multiple architectures weakens the argument that this trend is a fundamental characteristic of diffusion models in general. Figure 3: The authors claim in Section 3.2 that “higher KL regularization introduces more high frequencies”. However, the figure does not provide clear evidence supporting this statement. 
No explicit trend is observed between the scale of KL regularization and the power of high-frequency components, contradicting the claim made in the text. This lack of alignment between theoretical justification and empirical results raises concerns about the robustness of the proposed hypothesis. Given these limitations in the spectral analysis, the motivation for the proposed regularization technique appears to rest on a weak foundation. The authors do not provide sufficiently clear or convincing evidence to establish a strong causal link between their observations and the claimed effects on diffusion model performance. A more rigorous analysis, including evaluations across multiple architectures and additional spectral studies, would be necessary to substantiate their claims.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense. However, a more comprehensive evaluation across multiple architectures for image generation would be necessary to substantiate the claim that the method effectively improves diffusability in LDMs more broadly. Since the strength of the regularization is controlled (stated to be 0.25 in the paper), the scale factor for the scale-equivariance term should be included.

Theoretical Claims: NA

Experimental Designs Or Analyses: The experimental design seems to be well-constructed and appropriate for the research objectives. The authors effectively evaluate their method on both image and video generation tasks, which demonstrates the broader applicability of their proposed regularization technique. Furthermore, the ablation study provides valuable insights by comparing the autoencoder's reconstruction quality with and without the regularization term, highlighting the impact of their approach on the model's performance. However, there is still an issue with the unclear trend in Table 3, where the relationship between the KL regularization scale factor and high-frequency components is not clearly demonstrated.
Supplementary Material: NA

Relation To Broader Scientific Literature: The authors propose a regularization method that truncates these high frequencies, drawing from earlier work on spectral regularization techniques. By aligning the spectral properties of latent and RGB spaces, their approach improves generation quality, offering a novel perspective on latent space manipulation in generative models.

Essential References Not Discussed: NA

Other Strengths And Weaknesses: NA

Other Comments Or Suggestions: NA

Questions For Authors:
1. As the goal of this paper is to improve diffusability, could you clarify whether the term diffusability encapsulates both the efficiency and generation quality of diffusion models? Specifically, does an increase in diffusability directly imply an improvement in generation quality, as suggested in your results? If not, are there any proposed methods or metrics to quantitatively assess diffusability in the context of latent diffusion models?
2. As I mentioned above, in your analysis, the hypothesis regarding the impact of high-frequency components on diffusability is derived from spectral properties. Could you provide a more detailed explanation of how the spectral analysis led to this hypothesis? Specifically, what were the key observations or trends that informed this conclusion?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for their insightful comments, which have greatly improved our work. Below, we address each concern.

> Ambiguous caption in Figure 2

Figure 2 shows spectra of latent codes from real images encoded with from-scratch trained FluxAE with varying bottleneck sizes. We clarified the caption accordingly.

> Exploring influence of bottleneck dimensionality on high-frequency components with more architectures

Upon closer inspection, we discovered a broadcasting issue in our spectrum computation pipeline, which led to distorted plots in the original submission. Updated FluxAE results are [[here]](https://anonymous.4open.science/r/ae-diff-icml-rbtl-815F/figures/fluxae-ch.png). We further added analyses for two additional architectures: [[WanAE]](https://anonymous.4open.science/r/ae-diff-icml-rbtl-815F/figures/wanae-ch.png) (which has the same bottleneck dimensionalities as CogVideoX-AE, but is considerably faster to train) and [[LTX-AE]](https://anonymous.4open.science/r/ae-diff-icml-rbtl-815F/figures/ltxae-ch.png), trained with increasing bottleneck channel sizes. As one can see from these figures, autoencoders with higher channel sizes tend to possess high frequencies of larger relative magnitude.

> Fig. 3 does not show clear KL-high-frequency trend

The FluxAE KL plot previously had the same broadcasting issue; corrected results are [[here]](https://anonymous.4open.science/r/ae-diff-icml-rbtl-815F/figures/fluxae-kl.png). We included similar analyses for [[WanAE]](https://anonymous.4open.science/r/ae-diff-icml-rbtl-815F/figures/wanae-kl.png) and [[LTX-AE]](https://anonymous.4open.science/r/ae-diff-icml-rbtl-815F/figures/ltxae-kl.png). These demonstrate a clearer relationship between KL and high-frequency energy. The influence of KL on the high-frequency spectrum can be attributed to its role in injecting random noise during the encoding stage—an effect present both during training and inference.
As KL regularization increases, so does the level of injected noise. Since random Gaussian noise has a uniform power spectrum, this flattens the frequency distribution, disproportionately inflating the high-frequency tail. We illustrate this in [[this plot]](https://anonymous.4open.science/r/ae-diff-icml-rbtl-815F/figures/rgb-noise.png), where progressively increasing Gaussian noise (added to normalized RGB signals in [-1, 1]) results in a visible elevation of the high-frequency content. A similar effect was also independently observed in concurrent work on [[SwD distillation (Figure 1)]](https://arxiv.org/pdf/2503.16397). This reveals a nuanced trade-off: KL regularization aligns latents to the standard normal prior, facilitating downstream diffusion (as noted by LSGM), yet increases high-frequency energy, potentially hindering diffusability. We believe this trade-off warrants further attention.

> Eq. 2 missing SE regularization loss weight

We thank the reviewer for pointing this out and have corrected the equation.

> Table 3: unclear relationship between KL strength and high-frequency components

There may be a misunderstanding: Table 3 illustrates KL's negative effect on diffusability, whereas its impact on high-frequency energy appears in Figure 3. To strengthen this analysis, we added results from DiT-L [[here]](https://anonymous.4open.science/r/ae-diff-icml-rbtl-815F/figures/new-kl-table.png). This table shows that increased KL generally boosts small-model LDM performance, but at the expense of poorer reconstruction and stability, limiting scalability (consistent with SD3 findings). In contrast, our regularization improves LDM performance without harming reconstruction, scaling well to larger models.

> Clarify "diffusability": does it include efficiency and quality, and are there metrics?

We use "diffusability" strictly for generation quality, independent of efficiency. All else equal, increased diffusability should enhance LDM generation quality.
Unfortunately, we found no reliable quantitative metrics correlating consistently with diffusability, including diffusion loss magnitude or decoder Lipschitz constants (as theoretically connected via Eq. 28 in [[LFM]](https://arxiv.org/pdf/2307.08698)). We believe it is still important to introduce such a term to the community to bring attention to this property, since it remains largely ignored.

> Clarify how spectral analysis suggested high-frequency components impact diffusability

We initially sought high-compression autoencoders but quickly found that increasing the bottleneck size significantly reduces diffusability, keeping other factors constant. We were also concurrently working on cascaded latent diffusion pipelines, which required autoencoders to support downsampling, motivating us to explore their spectral characteristics. This revealed how increased bottleneck dimensionality relates to diminished diffusability, informing our central hypothesis. We would greatly welcome any further comments the reviewer might have.

---

Rebuttal Comment 1.1:

Comment: I appreciate the authors' constructive response. My primary concern was the unclear trends in the spectral plots, which limited the credibility of the paper's main claims. The authors identified an issue and provided corrected plots, which improved the reliability and clarity of the results. In addition, I acknowledge the authors' effort to strengthen the generalizability of their findings by incorporating results from additional autoencoder architectures (WanAE and LTX-AE). Since the ambiguous parts of the paper have been appropriately clarified, I am raising my score to 3.
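As a side note on the spectral argument in the rebuttal thread above (that injected Gaussian noise, having a flat power spectrum, disproportionately inflates the relative high-frequency tail of a smooth signal), the effect is easy to check numerically. This is an illustrative sketch with a toy 1-D signal and a hypothetical frequency-bin cutoff, not the authors' spectrum pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 3 * t)              # smooth, low-frequency signal
noisy = clean + 0.3 * rng.normal(size=t.size)  # add white Gaussian noise

def high_freq_fraction(x, cutoff=100):
    """Fraction of spectral power above a frequency-bin cutoff."""
    power = np.abs(np.fft.rfft(x)) ** 2
    return power[cutoff:].sum() / power.sum()

# White noise spreads power uniformly across bins, so it inflates the
# high-frequency tail far more, in relative terms, than the low band.
print(high_freq_fraction(clean))  # essentially zero
print(high_freq_fraction(noisy))  # substantially larger
```

The same mechanism is what the rebuttal attributes to KL regularization: the stronger the injected encoder noise, the flatter the latent spectrum.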
Summary: The authors analyze the latent space of autoencoders widely used for latent diffusion models and identify that the spectrum of autoencoders typically deviates from that of natural images. In particular, latent spaces have stronger high-frequency components compared to RGB images. These high-frequency components will be challenging for the diffusion model to learn and could impede performance. The authors propose a simple regularization to align the spectrum of the latent space with that of RGB images. The authors train the autoencoder to reconstruct a downsampled version of the RGB image from a downsampled latent code. This removes the high-frequency components from both the RGB image and the latent and enforces scale equivariance. The authors present results across image and video generation and demonstrate that this autoencoder regularization improves the downstream performance of diffusion models. Extensive comparisons with KL regularization are presented, as well as additional spectral regularization methods in the appendix.

Claims And Evidence: The claims are supported by clear and convincing evidence. The spectral analysis, in particular, is insightful and clearly motivates their proposed approach. The authors present a comprehensive analysis of the effect of their regularization on both the autoencoder and the downstream generative models. The comparison against KL regularization, the current standard, is comprehensive.

Methods And Evaluation Criteria: The authors are concerned with improving the suitability of autoencoders for downstream generative modeling. The authors evaluate primarily on ImageNet 256 and Kinetics 700, which are suitable benchmarks for latent diffusion modeling. Their evaluation of their autoencoders (reconstruction metrics, spectral analysis) and generative models (FID) is convincing.

Theoretical Claims: The authors do not present any proofs.
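The scale-equivariance idea summarized above (reconstruct a downsampled image from a downsampled latent, alongside the usual reconstruction term) can be sketched with toy NumPy stand-ins. The `decoder`, the shapes, and the average-pool `downsample` here are illustrative assumptions, not the paper's implementation; the `lam=0.25` weight follows the value quoted in the other review:

```python
import numpy as np

rng = np.random.default_rng(0)
W_proj = rng.normal(size=(3, 4)) / 2.0  # toy latent(4ch) -> RGB(3ch) projection

def decoder(z, up=4):
    """Toy decoder: nearest-neighbor upsample, then channel projection."""
    z_up = z.repeat(up, axis=1).repeat(up, axis=2)
    return np.einsum("oc,chw->ohw", W_proj, z_up)

def downsample(x, f=2):
    """Average-pool a (C, H, W) array by factor f along spatial dims."""
    C, H, W = x.shape
    return x.reshape(C, H // f, f, W // f, f).mean(axis=(2, 4))

def total_loss(z, x, lam=0.25):
    recon = np.mean((decoder(z) - x) ** 2)                       # usual term
    se = np.mean((decoder(downsample(z)) - downsample(x)) ** 2)  # scale-equivariance term
    return recon + lam * se

z = rng.normal(size=(4, 8, 8))    # toy latent
x = rng.normal(size=(3, 32, 32))  # toy "RGB" target
print(total_loss(z, x))
```

The key property illustrated is that the extra term never sees the high-frequency half of the spectrum: both the latent and the target are averaged down before comparison, so the decoder is discouraged from manufacturing high frequencies from low-frequency latent content.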
Experimental Designs Or Analyses: The experimental design is sound and the experimental comparisons are fair. One limitation is that the authors always start from an existing high-quality autoencoder and then fine-tune it with their additional regularization. While much more computationally feasible, this raises the question of whether their regularizer conveys the same benefit when training an autoencoder from scratch. I do not think that this is a big limitation, as their method can always be introduced towards the end of training to align the latent spectrum. However, it does limit the ramifications of their findings somewhat. When training a new autoencoder, it is not entirely clear what the optimal procedure is.

Supplementary Material: I did review the supplementary material. I appreciated the additional discussion of more sophisticated spectral regularization techniques.

Relation To Broader Scientific Literature: While diffusion models, and latent diffusion models in particular, have exploded in popularity, I think there has been comparatively less focus on what makes a "good" autoencoder for latent diffusion. People often use publicly available autoencoders for which the training decisions may not be entirely transparent. I think that this area is currently under-explored, and this paper is a welcome remedy to that.

Essential References Not Discussed: The discussion of related work is comprehensive.

Other Strengths And Weaknesses:

Strengths:
1. This work focuses on an under-explored problem: What makes a "good" autoencoder for latent diffusion? I think that studying this problem is challenging in part because evaluation requires training both an autoencoder and a downstream diffusion model in its latent space. I welcome work in this area.
2. The proposed regularization is well-motivated.
I find the "spectral autoregression" interpretation of diffusion models to be intuitive, and it is nice to see this intuition motivating a technique that seems to work well in practice. The spectral analysis throughout is insightful.
3. The proposed regularization is simple to implement, increasing the likelihood of adoption. I appreciated the discussion of more complex alternatives in the appendix.
4. The comparison with KL regularization is comprehensive.

Weaknesses:
1. The work focuses only on fine-tuning pre-trained autoencoders. The regularization could behave differently when learning a model from scratch with additional losses (e.g. adversarial). This limits the takeaways from their work somewhat.
2. The autoencoders are fine-tuned on private internal datasets, which harms reproducibility. The authors do verify (with their fine-tuning-only ablation) that the dataset shift doesn't contribute to the performance boost.
3. Some of the implementation details are a bit under-explained. The authors mention that they incorporate self-conditioning in the DiT, but do not provide precise implementation details.

Other Comments Or Suggestions: The DiT training details section in Appendix A ends on a trailing sentence.

Questions For Authors:
1. How exactly is the self-conditioning implemented? Is it the latent self-conditioning from the RIN paper, or just self-conditioning on the current data prediction?
2. The behavior of the KL regularization seems somewhat non-monotonic (Table 3). I would expect increased regularization to monotonically harm reconstruction, but this does not appear to strictly be the case. For instance, 10e-6 achieves worse reconstruction than 10e-3. Any insight into why this is the case? Did you observe training instabilities?

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for their thorough feedback. In what follows, we carefully respond to each of the points raised. All comments and suggestions will be fully reflected in the revised manuscript.

> From-scratch training

Our main motivation for fine-tuning instead of training from scratch is three-fold:
1. To be able to compare against strong established benchmarks;
2. To eliminate the possibility that we might have some "bug" in the training pipeline which our regularization is rectifying (please note that *none* of the explored SotA AEs release their training code);
3. To carry more value to the community by improving popular autoencoders.

That being said, we launched multiple experiments for from-scratch training of FluxAE and CogVideoAE. Moreover, we opted for using public datasets to have fully self-sufficient experiments. Namely, we trained FluxAE from scratch on ImageNet for 200K steps and CogVideoAE from scratch on Kinetics-700 for 60K steps (it is a heavyweight AE and slower to train). Due to the limited rebuttal period, we only have the DiT-B/2 results up to 300K steps on ImageNet and up to 250K steps on Kinetics. For both datasets, the regularized AEs lead to better LDM convergence. DiT-B/2 for non-regularized vs. regularized AEs performs as follows:

- For ImageNet:
  - DinoFID@5k: 569.93 vs 561.4
  - DFID@5k: 27.94 vs 28.79
- For Kinetics:
  - DDinoFID@5k: 652.5 vs 561.21
  - DFID@5k: 21.54 vs 19.66
  - DFVD@5k: 265.94 vs 379.88

In all cases, our regularization improves the diffusability. We will include the full training results (400K) in the revised version of the paper.

> Training on public data

We thank the reviewer for pointing this out; we include the results on public data together with the from-scratch training experiments in the previous message.

> Some training details are under-explained (e.g. self-cond)

Our self-conditioning mechanism follows prior work without any modifications (i.e., RIN, FIT, WALT).
Namely, during training, with a 90% probability, we run an auxiliary forward pass with the DiT model, take its activations from the last block (i.e., right before the "unpatchify" projection), project them with a linear layer, and add them as residuals to the input tokens after "patchification" of the main training forward pass. For that auxiliary forward pass, following RIN, we use the same noise level $\sigma$ and a "no-grad" context (i.e., we do not backpropagate through the auxiliary forward pass). We added the self-conditioning discussion to the "Implementation details" appendix. We would be grateful to the reviewer if they point out any further missing implementation details, and we would happily include them in the submission.

> The DiT training details section in appendix A ends on a trailing sentence.

We thank the reviewer for pointing out that writing mistake. The sentence was intended to convey: "In essence, this reduces the total dataset size, but since we do the same procedure for the entire CogVideoX-AE family, the models are comparable between each other." We fixed the error.

> Explanation of non-monotonic KL influence on reconstruction quality (Table 3).

Yes, FluxAE was indeed unstable when fine-tuned with KL regularization for some of the KL β weights: it was stable for β of 0, 1e-7, 1e-4, and 1e-3, but not for 1e-6, 1e-5, 1e-2, and 1e-1. For from-scratch training (which we were doing for Figure 3), it was stable for the entire range (from 0.0 to 0.1), but below some threshold of β <= 1e-4, there was almost no difference in PSNR or FID. Our intuition is that high-capacity autoencoders (like FluxAE) can accommodate a high KL penalty for their latents (for from-scratch training, we started noticing degradation only for β >= 1e-3), and their reconstruction quality is governed by other factors, making KL interference less predictable (up to a certain factor).
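For concreteness, the self-conditioning procedure described in the response above can be sketched schematically. The toy blocks, dimensions, and zero-initialized projection below are illustrative assumptions (zero initialization of the residual projection is a common choice, but it is not stated in the source), not the authors' actual DiT implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # toy token dimension
# Stand-ins for transformer blocks; each lambda captures its own weights.
blocks = [
    (lambda h, W=rng.normal(size=(D, D)) / np.sqrt(D): np.tanh(h @ W))
    for _ in range(4)
]
proj = np.zeros((D, D))  # linear projection of last-block activations (zero-init here)

def forward(tokens):
    h = tokens
    for blk in blocks:
        h = blk(h)
    return h

def forward_with_self_cond(tokens, p=0.9):
    # With probability p, run an auxiliary pass, project its last-block
    # activations, and add them as residuals to the input tokens. In real
    # training the auxiliary pass is treated as a constant ("no-grad").
    if rng.random() < p:
        tokens = tokens + forward(tokens) @ proj
    return forward(tokens)
```

In the real setting, `proj` is a learned layer and the auxiliary pass runs under a no-grad context; the zero initialization here just keeps the toy deterministic, so the self-conditioned and plain passes coincide at initialization.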
Should the reviewer have further comments, we would be glad to incorporate them fully into our manuscript.
Skip the Equations: Learning Behavior of Personalized Dynamical Systems Directly From Data
Accept (poster)
Summary: The paper tackles the modeling of personalized dynamical systems — that is, dynamical systems whose trajectories evolve conditioned on a set of static parameters, such as the initial condition in ordinary differential equation (ODE) systems, together with other "personal" covariates, such as a patient's weight or age, which affect the dynamics of e.g. drug metabolism in the patient. To do this, the paper extends the very recent Semantic ODE framework of Kacprzyk and van der Schaar (2025), which breaks down the shape of ODE trajectories into a *composition* (a sequence of *motifs* such as "increasing and convex" or "decreasing and concave") and their *properties* (the specifics of these motifs, such as the locations and heights of local maxima). Semantic ODE is trained to predict the easily interpretable semantic composition and its properties that best describe one-dimensional noisy trajectories, given the initial condition of the system. The authors extend Semantic ODE in two directions:
1. First, they allow for the modeling of systems of dimension larger than one, by making use of a "channel independent strategy" — that is, by treating each component of the target multidimensional system as independent.
2. Second, they allow the composition map and properties sub-maps to process multidimensional covariates.

The authors test their methodology on different synthetic and real-world datasets that correspond to very different personalized dynamical systems, and compare their performance against that of symbolic and neural ODE models.

*References*:
- Kacprzyk and van der Schaar (2025): No Equations Needed: Learning System Dynamics Without Relying on Closed-Form ODEs, ICLR

## Update After Review

The authors' replies to my questions and concerns were satisfactory. Hence, I increased my score.

Claims And Evidence: The authors claim that their methodology enables practitioners to:
1.
readily integrate prior knowledge, which is done via the pre-selection of the composition library (as in Section 5.2);
2. easily understand the main aspects of the dynamics, through the interpretability of their motifs (as in Section 5.3);
3. ensure desired behavior and revise the model when necessary (as in Sections 5.4 and 5.5).

The authors also claim that their model is flexible enough to be competitive in forecasting tasks. Overall, their results in Sections 5 and 6 align well with these claims. However, **most of these claims also apply to the original Semantic ODE framework on which this work is based**. The proposal extends Semantic ODE to multidimensional systems with multidimensional covariates. **Although their target datasets do feature different covariates, they mostly consist of one-dimensional processes**. Furthermore, the authors never display their target datasets nor their predictions, so one can only rely on the scores in Tables 1 and 2, **which are also not described**.

Methods And Evaluation Criteria: The methodology proposed by the authors is interesting. Their results demonstrate that it represents an alternative to ODE modeling which, as the authors point out, does not necessarily lead to easily interpretable representations of the dynamics.

*Regarding evaluation*: I did not manage to find what metric is reported in Tables 1 and 2. From the introduction one gathers that it must correspond to some divergence between targets and predictions up to some horizon time (i.e. a forecasting task). **None of these details are provided** (or at least, I could not find them).

Theoretical Claims: There are no major theoretical claims in this work.

Experimental Designs Or Analyses: The organization of Section 5 nicely illustrates how the proposal addresses all the claims made in the introduction.
Section 6 compares the proposal against well-established ODE-based methods, and Appendix D contains the details about the target datasets and the training details of all baselines. Unfortunately, and as I wrote above, there is little to no information regarding the metrics reported in Tables 1 and 2.

Supplementary Material: Yes. I especially revisited D, C and A more than once.

Relation To Broader Scientific Literature: The authors heavily rely on the work by Kacprzyk and van der Schaar (2025), which introduced Semantic ODE. Indeed, the proposal extends Semantic ODE to multidimensional systems with several static covariates, *which required a novel implementation of the composition map and properties sub-maps*. These extensions are important and have great applicability. However, only one target dataset has dimension larger than one. The paper would benefit from experiments on other higher-dimensional systems — even if such systems will often display periodic or periodic-like behavior, which is hard to model with the proposed methodology, as acknowledged by the authors. Furthermore, there are some details, like the specifics of the trajectory predictor $F_{traj}$, which the authors could include in the Appendix. Otherwise the reader is forced to read Kacprzyk and van der Schaar (2025) to understand the proposal. In my opinion, an ICML paper should be self-contained, especially given that the work by Kacprzyk and van der Schaar (2025) just came out. Finally, the introduction section borrows many lines from Kacprzyk and van der Schaar (2025). This is to be avoided.

Essential References Not Discussed: **I do not know about any fundamental paper missed by the authors**. On the contrary, the authors compared their methodology against different ODE-based models, which are widely used by the dynamical systems and machine learning community.
What is more, since the proposal's most crucial aspects are its transparency, verifiability, and editability (as written by the authors), I do not think it is necessary to compare their method with the most recent, transformer-based forecasting models that do not rely on continuous-time representations.

Other Strengths And Weaknesses: I think the main weakness is that the paper could be seen as incremental, for it relies too much on the Semantic ODE framework of Kacprzyk and van der Schaar (2025). Let me summarize the main weaknesses I highlight above:
1. The authors only study one target dataset of dimension larger than one. This is a weakness because one of the main claims of the authors is that they extend Semantic ODE to multidimensional systems.
2. The specifics of the trajectory predictor $F_{traj}$ are missing. The paper should be self-contained.
3. The introduction borrows many lines from Kacprzyk and van der Schaar (2025).
4. The authors do not display the target trajectories nor their predictions. The time series forecasting community often includes such plots, because they help readers better understand the performance of the models under study. It would also illustrate the connection between the motifs predicted by the model and the actual data.
5. There is no description of the metrics used to compute the scores in Tables 1 and 2 (or at least, I did not find them).
6. The authors give no details (or I did not find them) regarding the computational cost of their proposal. For example, how long does it take to optimize their models on the different datasets as compared to the baselines?

Other Comments Or Suggestions: In the second column, first paragraph of page 4, the authors use both $v \in \mathcal{V}$ and $\boldsymbol{v} \in \mathcal{V}$ to refer to the covariates. Unless I am misunderstanding something, they always correspond to vectors, right?

Questions For Authors:
1. How far into the future do you predict the dynamics of the systems you study?
2. The strength of the noise corruption is set to 0.01. How does the model respond to different noise corruptions? Is it as robust as Semantic ODE?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Dear Reviewer htLB,

Thank you so much for such a comprehensive review. We deeply appreciate your time and attention spent on our paper. We are glad that you found our methodology interesting, comparisons comprehensive, and claims justified. We answer the six weaknesses in your summary, then the two questions. Finally, we address your comment about the incremental nature of our work.

### W1. Datasets with more than one dimension

We have now added experiments on the viral dynamics model of HIV [1] ($M=3$) in two noise settings, where the features include not only the initial conditions but also four additional parameters. We have also tested our method on the SIR model with more noise. Overall, we have added three additional multidimensional datasets (all results can be seen [here](https://imgur.com/a/5nnGzqi)). The main challenge of extending Semantic ODE to multidimensional systems lies in the fact that these trajectories can no longer depend on a single initial condition $x_0\in\mathbb{R}$ but on a multidimensional $\mathbf{x}_0 \in \mathbb{R}^M$ (where $M>1$). Thus, the main contribution is the extension to multidimensional inputs, which we demonstrate in all our experiments. We hope that this explanation and the additional experiments provide sufficient evidence for our claim.

**Actions taken**: Added additional experiments on multidimensional datasets in Appendix A.

### W2. Details of $F_{\text{traj}}$

**Actions taken**: Added a description of the trajectory predictor in Appendix E, as well as details on compositions, motifs, and properties, to make the paper self-contained.

### W3. Similar examples in the introduction

**Actions taken**: Replaced some of the examples and references in the introduction to limit overlap with the paper on Semantic ODEs.

### W4. Predicted trajectories

**Actions taken**: Included plots of sample trajectories predicted by EPISODE in Appendix A. These can be seen [here](https://imgur.com/a/IZHZ6l2).

### W5.
Metrics used

Please see our response to Reviewer g6tH (Evaluation metric).

**Actions taken**: Described the metric used in Section 5.

### W6. Computational cost

We have now added a [table](https://imgur.com/a/4jhWn1H) to Appendix D showing the overall computational cost of all approaches (including five runs and hyperparameter tuning). We also show the individual times to train the composition map and the property maps. All experiments were performed on an 18-core Intel Core i9-10980XE with 60GB of RAM. As mentioned in the limitations, training the composition map is the most time-consuming process, as it requires fitting every admissible composition to each sample (this can be parallelized). Note that we perform this preprocessing *once* for all five seeds and then subsample the results based on the split. The actual time to fit the decision tree is negligible. A more detailed breakdown of the time needed to train the composition map is provided in the [following table](https://imgur.com/a/90rECtR). We discuss it further in our response to Reviewer evCm (Q1).

**Actions taken:** Added computation times in Appendix D.

### Q1. Time horizon

Each dataset has a specific time horizon that is subsequently scaled to $(0,1)$. The details are in Appendix D.1. However, for unbounded motifs, our model can predict for any time $t\geq0$. We evaluate each model on a held-out dataset of samples whose trajectories are observed at the same time points as the ones in the training dataset.

### Q2. Robustness

**Actions taken**: Added five additional experiments on datasets with higher noise ($\sigma=0.1$) in Appendix A. EPISODE shows robustness to noise in these settings. The results can be seen [here](https://imgur.com/a/3pjzwQS).

### Minor

**Actions taken**: Fixed the typo on page 4 (it should have been $\mathbf{v} \in \mathcal{V}$).

### Incremental contributions

Semantic ODE only works for one-dimensional trajectories predicted from one-dimensional inputs.
We have extended it to more realistic settings. In particular, we:
1. Extended the composition map from an interval partition to a decision tree and designed a novel optimization algorithm for fitting it.
2. Extended the property functions from univariate functions to GAMs and designed a novel optimization algorithm for fitting GAMs such that the predicted properties are consistent with the composition.
3. Improved composition map training by using the Dynamic Time Warping distance.
4. Accommodated bounded compositions ($t_{\text{end}}\in\mathbb{R}$).
5. Accommodated categorical variables as features.
6. Implemented a visualization tool to visualize the fitted GAMs.

**References**
1. Hill, A. L., Rosenbloom, D. I., Nowak, M. A., & Siliciano, R. F. (2018). Insight into treatment of HIV infection from viral dynamics models. Immunological Reviews, 285(1), 9-25.

---

We hope we have addressed all your concerns. Please let us know if you have any additional questions; we are eager to address them.

Kind regards,
Authors

---

Rebuttal Comment 1.1:

Comment: Dear authors, thanks very much for your replies. I have increased my score.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer htLB,

Thank you very much for your reply and for increasing your score. Your comments helped us improve our paper by adding clarifications, experimental details, additional datasets, and noise settings. We hope to have the opportunity to present our work at ICML, where we believe it can advance data-driven modeling of dynamical systems beyond black-box methods and closed-form expressions.

Kind regards,
Authors
Summary: The paper proposes EPISODE, a framework for learning the behavior of personalized dynamical systems without requiring explicit equation discovery. Instead of the traditional two-step approach of identifying ODEs and then analyzing them, EPISODE directly predicts the semantic representation from data. The paper extends prior direct semantic modeling approaches to accommodate multi-dimensional trajectories and integrate personalization via auxiliary static features.

Claims And Evidence: The paper claims to provide an alternative to equation-based modeling by learning behavior directly from data, which is supported by a detailed introduction to the algorithm and experimental results on both synthetic and real-world datasets.

Methods And Evaluation Criteria: The method seems reasonable but relies heavily on the incorporation of semantic inductive biases. Such incorporation benefits the learning process but, at the same time, depends on prior knowledge about the behavior of the dynamical system, raising questions about its scalability to complex systems. Additionally, the realization is not end-to-end, requiring manual verification and editing, which raises concerns about efficiency. The performance comparison is first mentioned in line 420, but I did not find an introduction to the evaluation metric. Moreover, based on the results shown in Tables 1 and 2, the method does not significantly outperform other baselines.

Theoretical Claims: The paper provides a solid conceptual framework, and no obvious issues are found with the theorem.

Experimental Designs Or Analyses: The case study on Tacrolimus is well-motivated. The comparison with different model classes is comprehensive. However, the computational efficiency is not clearly discussed.

Supplementary Material: The appendices provide additional details about experiments, theory and related work.
Relation To Broader Scientific Literature: The work is closely related to recent efforts in physics-informed machine learning and symbolic regression.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: See above.

Other Comments Or Suggestions: See above.

Questions For Authors: See above.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: Dear Reviewer g6tH,

Thank you very much for your review. We appreciate your time and effort spent reviewing our paper. We are glad you found our conceptual framework solid, the case study well-motivated, and the comparisons comprehensive. We address your comments below.

### Reliance on inductive biases

> The method seems reasonable but highly relies on the incorporation of semantic inductive biases.

Thank you for this comment, as it allows us to clarify the experimental setup in our paper. Although our method *can* incorporate semantic inductive biases, it *does not require* this information. The results in Table 2 (EPISODE) are for a generous set of compositions without much prior knowledge. We consider all compositions up to length 4 (or length 8 for the Bike sharing dataset) except for compositions where two consecutive transition points are inflection points. This ends up being, respectively, 26 and 34 compositions. Thus, EPISODE achieves competitive results for a relatively large choice of compositions and minimum prior knowledge. Often, the results are not significantly different for EPISODE*, where we incorporate such knowledge (see Tacrolimus or Bike sharing dataset). Prior knowledge is, however, useful in reducing the training time, satisfying requirements, and improving extrapolation.

**Actions taken**: We have now clarified the amount of prior knowledge used in experiments in Section 6.

### End-to-end

> Additionally, the realization is not end-to-end, requiring manual verification and editing ...

We would like to clarify that, although manual verification and editing are big advantages of our approach, they are *not necessary*. Although the training process consists of two steps (training the composition map and then the property maps), the whole pipeline *is* end-to-end. There is *no manual intervention* in experiments performed in Section 6.
**Actions taken**: We have now clarified the absence of manual intervention in experiments in Section 6.

### Evaluation metric

Thank you for pointing this out! We have now described the used metric in Sections 5 and 6. The error is given by

$$ \mathcal{L} = \frac{1}{D} \sum_{d=1}^D \sqrt{\frac{1}{N_d} \sum_{n=1}^{N_d} \left(\frac{1}{M} ||\mathbf{F}(\mathbf{v}^{(d)})(t_n^{(d)})-\mathbf{y}_n^{(d)}||_2^2\right)} $$

where $\mathbf{F}$ is the predictive model (EPISODE), $D$ is the number of samples, $M$ is the dimensionality of the system, $N_d$ is the number of measurements of the $d^{\text{th}}$ sample $(\mathbf{v}^{(d)}, (t\_n^{(d)}, \mathbf{y}\_n^{(d)})\_{n=1}^{N\_d})$. We choose this metric because for $M=1$ it reduces to the standard mean RMSE, i.e., $\frac{1}{D}\sum_{d=1}^D \sqrt{\frac{1}{N_d} \sum_{n=1}^{N_d} (F(\mathbf{v}^{(d)})(t_n^{(d)})-y_n^{(d)})^2}$ (used in Semantic ODE paper) and we normalize by $M$ so that it is easier to compare results between systems of different dimensionality. The error is calculated on a held-out dataset of samples, whose trajectories are observed at the same time points as the ones in the training dataset.

**Actions taken**: Described the used metric in Sections 5 and 6.

### Computational cost

We have now added a [table](https://imgur.com/a/4jhWn1H) to Appendix D showing the overall computational cost of all approaches (including five runs and hyperparameter tuning). We also show the individual times to train the composition map and the property maps. All experiments were performed on an 18-core Intel Core i9-10980XE with 60GB of RAM. As mentioned in the limitations, training of the composition map is the most time-consuming process as it requires fitting every admissible composition to each sample (this can be parallelized). Note that we perform this preprocessing *once* for all five seeds and then subsample the results based on the split.
A more detailed breakdown of the time needed to train the composition map is provided in the [following table](https://imgur.com/a/90rECtR). We discuss it further in our response to Reviewer evCm (Q1).

**Actions taken:** Added computation times in Appendix D.

### Performance

We have run additional experiments, including a new dataset with the viral dynamics model of HIV and settings with higher noise. All of the results can be seen [here](https://imgur.com/a/5nnGzqi). EPISODE* achieves nearly perfect performance on the SIR dataset, demonstrating superior noise robustness to SINDy, whose constrained search space should have given it an advantage on this simple problem. Both variants of EPISODE show superior performance on the PK and HIV datasets (in both low and high noise settings) compared to both ODE discovery and black-box models. Both variants also achieve superior performance to ODE discovery methods on the two real datasets (Tacrolimus and Bike sharing).

---

We hope we have addressed all your concerns. Please let us know if you have any additional questions, we are more than happy to address them.

Kind regards,
Authors
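As a side note for readers implementing the evaluation metric described in this rebuttal: a minimal NumPy sketch is given below. This is our illustration under assumed array shapes (one `(N_d, M)` array per sample), not the authors' evaluation code, and the name `episode_error` is hypothetical.

```python
import numpy as np

def episode_error(predictions, targets):
    """Mean per-sample RMSE, normalized by system dimensionality M.

    predictions, targets: lists of arrays, one per sample, each of shape
    (N_d, M) -- model outputs F(v^(d))(t_n^(d)) and observations y_n^(d)
    at the same time points.
    """
    per_sample = []
    for pred, obs in zip(predictions, targets):
        _, m = obs.shape
        # (1/M) * squared L2 norm of the residual at each time point
        sq = np.sum((pred - obs) ** 2, axis=1) / m
        # root of the mean over the N_d time points of this sample
        per_sample.append(np.sqrt(np.mean(sq)))
    # average over the D samples
    return float(np.mean(per_sample))
```

For $M=1$ this reduces to the standard mean RMSE, as stated in the rebuttal.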
Summary: The paper proposes a method called EPISODE for learning personalized dynamical systems (PDS) without explicitly discovering ordinary differential equations (ODEs). As mentioned in this paper, traditional approaches to modeling dynamical systems involve first identifying closed-form equations and then analyzing their properties. In contrast, EPISODE bypasses this two-step process by directly predicting a semantic representation of the system’s behavior (including the trajectory’s shape and key quantitative properties) from static inputs or covariates. The method utilizes decision trees to predict the shape of trajectories and generalized additive models (GAMs) to model their quantitative properties. ## update after rebuttal I am quite satisfied with the rebuttal. Claims And Evidence: The paper generally supports its claims with clear evidence. The key claim is that EPISODE offers a transparent, editable, and interpretable alternative to traditional equation discovery or black-box methods. This claim is supported by three aspects: - Demonstrations on synthetic and real datasets (e.g., SIR model, Tacrolimus). - Comparisons against established baselines. - The detailed case study shows the practical benefit of transparency, interpretability, and editability. However, some claims regarding scalability to more complex dynamical systems or handling oscillatory behaviors are acknowledged as limitations. Methods And Evaluation Criteria: In my opinion, the methods and evaluation criteria are very reasonable and relevant. EPISODE is rigorously compared with both closed-form ODE discovery methods (SINDy variants) and black-box approaches (Neural ODEs variants), using metrics such as mean squared error (MSE). The authors also carefully incorporate practical interpretability considerations into their evaluation, demonstrating the method’s practical benefits through real-world medical datasets. Theoretical Claims: Yes. 
But the paper does not primarily rely on complex theoretical claims. Experimental Designs Or Analyses: Yes. The experimental designs and analyses are sound and clearly structured. Experiments effectively illustrate the method’s strengths in interpretability and performance. The authors also conduct clear ablation studies and robustness checks. Supplementary Material: Yes, the supplementary material was reviewed, particularly the detailed appendices providing additional experimental results (Appendices A.1, A.2), dataset descriptions (Appendix D.1), and additional methodological clarifications (Appendices B, C, and E). Relation To Broader Scientific Literature: The paper positions itself clearly within the broader literature on dynamical system modeling and equation discovery (e.g., references to SINDy, Neural ODEs, ANODE, Semantic ODEs). Essential References Not Discussed: The paper adequately references and discusses existing works central to understanding its contribution. No immediate essential references seem missing. Other Strengths And Weaknesses: Strengths: - Novel combination of direct semantic modeling and GAMs, resulting in transparent and interpretable modeling. Simple but effective. - Practical relevance demonstrated through pharmacokinetic applications. Weaknesses: - Method restricted to finite compositions, limiting application to oscillatory or periodic systems. - The complexity and runtime involved in the composition map training could limit scalability. Other Comments Or Suggestions: The writing and presentation are generally clear. Minor typos (e.g., spacing, punctuation) could be corrected upon careful proofreading. More explicit guidelines on interpreting GAM plots might further benefit readers unfamiliar with GAMs. Questions For Authors: 1. What is the runtime complexity of the composition map training as the dataset size and dimensionality grow? 2. How sensitive is the method to the initial choice of compositions? 3. 
Could an automatic or semi-automatic strategy for composition selection be applied?
4. How could EPISODE be adapted or extended to systems with periodic or oscillatory behavior?

Ethical Review Concerns: N.A.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Dear Reviewer evCm,

Thank you very much for your positive review! We are glad you found our method novel and our claims well-evidenced. We reply to your questions first, and then we discuss other weaknesses and suggestions.

### Q1. Runtime complexity of the composition map training

As mentioned in the limitations, training the composition map is the most time-consuming process as it requires a preprocessing step of fitting every admissible composition to each sample (this can be parallelized). The actual time to fit the decision tree is negligible in our experiments. This preprocessing step has time complexity $O(D M |\mathcal{C}'|)$ where $D$ is the number of samples, $M$ is the dimensionality of the trajectory, and $|\mathcal{C}'|$ is the number of admissible compositions. Crucially, this time does not depend on $K$ (the dimensionality of static features $\mathcal{V}$). We have now included a table showing the time needed to train the composition map for each dataset (see [here](https://imgur.com/a/90rECtR)).

We currently use a computationally intensive procedure of minimizing a Dynamic Time Warping (DTW) distance and then the standard MSE error. There is a trade-off between the accuracy of fit and the computation time. Practitioners may opt for more approximate fits (e.g., by not using DTW). The advantage of training the composition map separately from the property map is that the errors table $e^{(d)}[c]$ (Eq. 1) can be computed once, saved, and then reused for different training settings of the composition decision tree and the property maps. This makes introducing changes to the model much quicker. The training time of the property maps (for five seeds and with tuning) can be found [here](https://imgur.com/a/4jhWn1H).

**Actions taken**: Added a discussion on the computational complexity in a newly created Appendix F (Additional Discussion) and tables with computation times in Appendix D.

### Q2. Sensitivity to the initial choice of compositions

The results in Table 2 (EPISODE) are for a generous set of compositions. We consider all compositions up to length 4 (or length 8 for the Bike sharing dataset) except for compositions where two consecutive transition points are inflection points. This ends up being, respectively, 26 and 34 compositions. Thus, EPISODE achieves competitive results for a relatively large choice of compositions (and minimum prior knowledge). EPISODE* uses prior knowledge about the shape of the compositions, constraining the set to between 1 and 4 compositions depending on the dataset (details in Appendix D.3). We can see that it significantly improves the performance in some settings (Tumor dataset) but often has no significant impact (Bike sharing dataset or Tacrolimus). In these settings, EPISODE is able to autonomously determine which compositions should be used in the composition map. This finding generalizes to the additional experiments that we have run (see [here](https://imgur.com/a/5nnGzqi)).

**Actions taken**: Added a discussion on the sensitivity of the method to the choice of the set of admissible compositions in a newly created Appendix F (Additional Discussion).

### Q3. Automatic strategy for composition selection

As described above, the algorithm can usually autonomously determine which compositions should be used in the composition map. Thus, a viable approach is to start with a large number of compositions and let the algorithm narrow it down. In the future, we would like to design a fast algorithm that provides an approximate fit between a composition and a trajectory that narrows down the search space before the actual fitting.

### Q4. Periodic or oscillatory behavior

EPISODE could, in the future, be extended to periodic or oscillating trajectories by extending the definition of semantic representation to accommodate periodic compositions.
Let us assume that we know that the trajectories are described by an infinite composition consisting of repeating $(s_{+-c},s_{--c},s_{-+c},s_{++c})$. Then we can describe this composition segment using properties like "amplitude" or "frequency". For each trajectory in our training dataset we can extract additional trajectories describing how these properties change over time. We can now model these auxiliary trajectories using a framework similar to EPISODE. However, we leave a full exposition of this idea for a future paper.

**Actions taken**: Added a discussion on extending EPISODE to infinite compositions in a newly created Appendix F (Additional Discussion).

### W1. Finite compositions

Please see our response to Q4.

### W2. Complexity of composition map training

Please see our response to Q1.

### Minor

**Actions taken**: We have now added explicit guidelines on interpreting GAM plots in Appendix E.

---

We hope we have addressed all your concerns. Please let us know if you have any additional questions, we are more than happy to address them.

Kind regards,
Authors
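For readers unfamiliar with the Dynamic Time Warping distance mentioned in Q1, a minimal textbook DTW implementation for two 1-D series is sketched below. This is an illustration of the standard dynamic-programming recurrence only; the authors' actual fitting procedure is not published here.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic-programming DTW between two 1-D series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])
```

Unlike a pointwise MSE, this distance tolerates temporal misalignment between a candidate composition and a trajectory, which is presumably why it is used before the final MSE refinement.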
Latent Action Learning Requires Supervision in the Presence of Distractors
Accept (poster)
Summary: This paper presents an empirical study of latent action learning in the presence of distractors. They found that latent action learning struggles with distractors, and propose several changes in architecture to improve latent action learning. Notably, they found supervision with a small amount of action labels could significantly improve latent action learning with distractors.

Claims And Evidence: I am curious about the claim on "Quantization hinders latent action learning". It seems that this is only verified by linear probing. However, as the author mentioned, linear probing has a major limitation - it can only tell us whether real actions are contained in latent actions or not. Removing quantization, in some sense, is similar to increasing the dimensionality. As a result, I believe it would be better if the authors could study this claim with experiments on the BC stage.

Methods And Evaluation Criteria: Yes. It follows the experiment setting proposed in LAPO.

Theoretical Claims: no theoretical claim

Experimental Designs Or Analyses: yes. seems sound to me

Supplementary Material: No

Relation To Broader Scientific Literature: the claim on line 126~127 is not very accurate. The NN architecture in LAPO is not used in MotoGPT, Dynamo, and LAPA. I believe they use different architectures.

Essential References Not Discussed: The paper fails to cite the following work on latent action learning that also introduces some improvements, such as using random cropping on data—an approach similar to the "adding augmentation" method discussed in the submission.

"IGOR: Image-GOal Representations are the Atomic Control Units for Foundation Models in Embodied AI"

Other Strengths And Weaknesses: Overall, I appreciate the paper as an empirical study on latent action learning with distractors. The problem setting is realistic, and the experimental work is solid.
One limitation is that the study focuses solely on LAPO without comparing it to other latent action learning methods such as Genie, LAPA, IGOR, and MotoGPT, which utilize varying neural network architectures for the latent action model. However, this appears to be an inherent challenge in latent action research—each work introduces similar yet distinct latent action models tailored for different applications. As a result, direct comparisons between existing approaches are limited, making it difficult to determine which neural network architecture represents the state-of-the-art. Other Comments Or Suggestions: No. Questions For Authors: I'm curious about the results shown in Figure 10. How is the separate decoder trained? Is the distractor constant, or does it display temporal correlations within each trajectory? If that's the case, then even when using only true actions, one might expect that an FDM decoder could learn to predict the distractor. Additionally, I'm not surprised that the LAM model is able to predict distractors, since it is trained by reconstructing the next frame, which naturally extends to predicting distractors as well. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for their time and effort. We address the questions below.

> I am curious about the claim on "Quantization hinders latent action learning". it seems that this is only verified by linear probing. However, as the author mentioned, linear probing has a major limitation - it can only tell us whether real actions are contained in latent actions or not. Removing quantization, in some sense, is similar to increasing the dimensionality. As a result, I believe it would be better if the author could study this claim with experiments on the BC stage.

You are absolutely right that removing quantization has a similar effect to increasing the dimensionality of latent actions, as both changes loosen the information bottleneck, which we believe can be harmful in the presence of distractors (for details see the last response to reviewer **sdKR**). However, we disagree with the conclusion about linear probes. As you rightly point out, probes can only tell us whether real actions are contained in latent actions or not. If probe loss is low, we cannot be sure that latent actions are minimal, only that they contain real actions. To test their true quality, we need to pre-train BC and fine-tune it in a real environment. Thus, low probe loss is a necessary (but not sufficient) condition for good latent actions. This means that if we get a high probe loss, it definitely means that the latent actions do not contain real actions and are therefore useless for subsequent fine-tuning. This is the reason why we did not consider FSQ further in the later stages, as its addition only worsens the probe loss and therefore definitely worsens the final result. However, to be certain, we performed an experiment with LAOM+FSQ. Due to time constraints, we only used walker environment and 3 random seeds (see [Figure](https://ibb.co/VY0djs7T)). As can be seen, FSQ does indeed worsen the downstream performance after fine-tuning. We will include this Figure (and for the remaining environments) in the Appendix.

> the claim on line 126~127 is not very accurate. The NN architecture in LAPO is not used in motoGPT, Dynamo, and LAPA. I believe they use different architectures.

You are right, we did not express ourselves clearly. In our case, the details of the architecture itself are not very important (even LAOM itself does not work well without supervision). What is important is that mathematically all these architectures do exactly the same thing as LAPO, and so inherit the same limitations. We will correct this statement in the new version of the paper.

> The paper fails to cite the … "IGOR: Image-GOal Representations …"

Thank you for the suggestion, this is indeed a highly relevant paper. We will include the citation.

> I'm curious about the results shown in Figure 10. How is the separate decoder trained? Is the distractor constant, or does it display temporal correlations within each trajectory? If that's the case, then even when using only true actions, one might expect that an FDM decoder could learn to predict the distractor.

For the decoder, we used an observation embedding after the ResNet encoder for each method. We trained it to reconstruct the observation and did not pass the gradients through the embedding to avoid changing the main training loop. The distractors are dynamic and change as the episode unfolds (video plays in the background, agent colour changes, camera shakes). You are right that with real actions, FDM from the original LAPO will learn to predict distractors. However, this is a problem of prediction in the pixel space. A method similar to LAOM, which predicts the next observation in latent space, with ground truth actions will provably recover the control endogenous minimal state, filtering out the distractors (see Preliminaries and [Multistep Inverse Is Not All You Need](https://arxiv.org/abs/2403.11940) paper). Why doesn't this happen with LAOM+supervision? Probably because the number of ground truth actions is extremely small in our case.
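As an aside, the linear probing discussed throughout this thread can be illustrated with a generic closed-form ridge-regression probe. This is a hypothetical sketch, not the paper's exact probe; the function name, array shapes, and the `reg` regularization strength are our assumptions.

```python
import numpy as np

def linear_probe_mse(latents, actions, reg=1e-4):
    """Fit a ridge-regularized linear map from latent actions to true
    actions and return the training MSE (the "probe loss").

    latents: (N, d_latent) array; actions: (N, d_action) array.
    High loss means the latents do not linearly encode the true actions;
    low loss is necessary but not sufficient for good latent actions.
    """
    z = np.hstack([latents, np.ones((len(latents), 1))])  # add bias column
    # closed-form ridge solution: W = (Z^T Z + reg * I)^-1 Z^T A
    w = np.linalg.solve(z.T @ z + reg * np.eye(z.shape[1]), z.T @ actions)
    pred = z @ w
    return float(np.mean((pred - actions) ** 2))
```

If the true actions are a linear function of the latents, this loss is near zero; for latents unrelated to the actions it stays near the action variance, which is the diagnostic the rebuttal relies on.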
Summary: Latent actions prove to be useful for efficient policy pretraining from unlabeled videos. This paper aims to enhance the quality of latent actions by removing the original information bottleneck, leveraging multi-step future observations, and predicting future states in the latent space. The authors also suggest that adding a small amount of action supervision can significantly mitigate the effects of distractors. The proposed methods are validated through linear probing accuracy and normalized returns in downstream tasks.

## *Update after rebuttal*

I remain unpersuaded by the authors' response. Specifically, while I agree that the modifications bring positive gains in the scenarios created by the authors, I still have the following major concerns:

- Due to the prediction loss and the extremely large latent dimension, the use of action supervision does not effectively produce a latent action representation. As revealed by Figure 10 and the responses to reviewers HSFB and sdKR, the latent capacity is so redundant that it passes almost all observation information through the latent. As a result, the policy is more like a video prediction policy rather than a latent action policy. However, the proposed method is not compared or discussed with previous video prediction policies [1,2].
- The conclusions may not be applicable to real-world applications (the authors have also mentioned this in Appendix A). The distractors are created by copying and pasting the background directly, which are quite different from real-world patterns. Additionally, the agents in the studied scenarios are always centered and share a simple appearance, which differ from the settings studied by previous latent action models [3,4]. While it is common to utilize simplified environments for research, the scenarios created by the authors are too unique to verify their applicability to true decision-making scenarios.
- Typical decision-making scenarios, such as Open-X-Embodiment and ProcGen, do not require using the proposed method (the authors confirmed in their response to my rebuttal comment). This also greatly compromises the applicability of the proposed methods to real applications.
- It would be beneficial to incorporate convincing visualizations like LAPO [3] to demonstrate the distribution of latent actions.

---

[1] Learning Universal Policies via Text-Guided Video Generation
[2] Learning to Act from Actionless Videos through Dense Correspondences
[3] Learning to Act without Actions
[4] Latent Action Pretraining from Videos

Claims And Evidence: Yes, they are supported by clear and convincing evidence.

Methods And Evaluation Criteria: I have some concerns about how the paper assesses the quality of latent actions. The objective of extracting latent actions is to fully encode action information while minimizing background noise, and the paper utilizes linear probing error to judge the quality of latent actions. However, a lower error does not necessarily equate to higher quality latent actions. Instead, it only indicates that the encoded latent space contains more action information, regardless of the ratio of useful action information.

Theoretical Claims: I have checked the theoretical claims in this paper.

Experimental Designs Or Analyses: The paper provides extensive experiments with sufficient details.

Supplementary Material: I have reviewed all appendices.

Relation To Broader Scientific Literature: The paper aims to improve the effectiveness of action pretraining, which could benefit the development of embodied AI.

Essential References Not Discussed: As far as I know, all closely related works are cited appropriately.

Other Strengths And Weaknesses: W1) **The reliability of the metric.** As I mentioned earlier, the quality of latent actions is influenced not only by the amount of action information encoded but also by the ratio of useful information to noise.
Therefore, I believe that linear probing error is not a reliable metric for assessing the quality of latent actions, as it only evaluates how much action information is contained in the latent.

W2) **The effectiveness of the method.** Guided by the inappropriate metric, the proposed LAOM removes the information bottleneck and significantly increases the latent action dimension to 8192. However, according to the downstream performance (blue lines) in Fig.6, these modifications yield little improvement and can even lead to worse performance with 2 and 4 labeled trajectories.

W3) **The scope of the proposed setting.** While introducing action labels mitigates the negative effects of distractors, it may be somewhat unfair compared to the motivation of LAPO (Schmidt & Jiang, 2023). LAPO and its related works exclude action labels during the pretraining stage to ensure their algorithms only rely on videos. This assumption has huge potential to exploit Internet-scale and cross-embodiment data, even without a consistent action format. However, the proposed setting is not aligned with this goal.

Other Comments Or Suggestions: Please see the questions below.

Questions For Authors: Q1) **Procgen results without distractors.** Aside from the environments with distractors, is it possible to compare the proposed techniques with LAPO on the original Procgen benchmark to demonstrate their effectiveness? I believe this would help readers fully understand the properties of the proposed method.

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback. We have tried to address the concerns below. > I have some concerns about how the paper assesses the quality of latent actions. The objective of extracting latent actions is to fully encode action information while minimizing background noise, and the paper utilizes linear probing error to judge the quality of latent actions. However, a lower error does not necessarily equate to higher quality latent actions. Instead, it only indicates that the encoded latent space contains more action information, regardless of the ratio of useful action information. > This is indeed a valid concern, which we already discuss in detail in the paper (see Section 4, second column, from lines 269-270). As the reviewer correctly points out, linear probing does not tell us that the resulting latent actions are minimal (in the sense that they contain only information relevant to actions without noise), but only allows us to detect the amount of information about real actions in the latent ones. It should be noted that the fact that latent actions contain information about real actions is a **necessary condition** for their usefulness. If the latent actions do not contain any information (as expressed by poor probe loss), this automatically means that they are useless for further BC pretraining. Therefore, before worrying about minimality, it is important to make sure that the latent actions contain real actions at all, which is exactly what we do with our LAOM modifications. With our experiments in Section 4, we show that in the presence of distractors, naive LAPO (especially with quantization) does not produce latent actions that contain sufficient information about real actions, and thus cannot be used for efficient pre-training. LAOM improves this by a factor of eight, ensuring that latent actions contain real actions, which directly translates into a twofold improvement in return. 
Does LAOM guarantee latent action minimality? It does not, and we state this explicitly in the paper (first column from line 320). On the contrary, without supervision, LAOM still performs poorly (Figure 6), although we can now hope that supervision will allow us to extract action information from latents, which would be impossible with LAPO (which is clearly illustrated by our main experiment in Figure 1). Demonstrating this was one of our main goals. > W1) **The reliability of the metric.** … I believe that linear probing error is not a reliable metric for assessing the quality of latent actions, as it only evaluates how much action information are contained in the latent. > We re-emphasise that linear probes were used only to show that LAPO does not work in the presence of distractors, and that even LAOM does not guarantee minimality. Our main contribution and claim is about the need for supervision, and we explicitly demonstrate this in the second part of the paper with experiments in the real environment and with real return. We believe that such clear improvements in return (see Figure 1) clearly indicate the higher quality of latent actions of LAOM vs. LAPO, and LAOM+supervision vs. LAOM. > W2) **The effectiveness of the method.**  As we stated above (and in responses to other reviewers), linear probing is not our final (and only) metric. It was only used to show that LAPO does not learn good latent actions, and that even after the modifications considered in LAOM - performance did not increase. So the fact that LAOM performs poorly on Figure 6 is not a problem with our approach. On the contrary, this is exactly what we wanted to show in order to highlight the general issues with latent action learning in the presence of distractors. And this is what is greatly improved by the addition of supervision (see Figure 1, 7). > W3) **The scope of the proposed setting.** We have to respectfully disagree with the reviewer's assessment. 
We are not changing the setting in any way: LAPO, LAPA, and other methods still require real actions; we are simply suggesting that they be used differently. On the contrary, it is precisely LAPO and related work that considers a simplified setting that lacks the distractors common to real videos, which in turn is not aligned with the goal of pre-training on Internet-scale data. The main purpose of our work was to show that vanilla LAPO will not scale to Internet videos due to distractors. > Q1) **Procgen results without distractors.** Our aim was not to provide a state-of-the-art method, but to study the properties of latent action learning methods in the presence of distractors, which is absent in the existing literature. LAOM is more of a set of suggestions for practitioners than a stand-alone method. Therefore, we think that the experiments on ProcGen are out of scope, as they do not allow us to study any additional properties of LAPO in the presence of distractors. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed rebuttal. I have read all of it, as well as the comments from other reviewers. Before I update the rating, I still have two major concerns. C1) **Scalability to cross-embodiment web-scale data.** This concern corresponds to W3 in my initial comments. LAPA and its related works can incorporate data from different embodiments to learn a universal policy. However, since the proposed method requires action supervision during training, it raises the problems of how to provide supervision when the action formats vary between embodiments and how to ensure that the action supervision for one embodiment will not exclude action information for other embodiments (for example, if the objective is to decode the action of the Franka arm, the motion of the WidowX or human could be regarded as distractors, which hurts unified training).
The paper studies cross-embodiment training in Section 5, but the four embodiments seem to be always centered and share similar appearances. Compared to previous LAMs, are there any potential limitations if the proposed method is applied to more diverse environments, such as Procgen in LAPO and Open-X-Embodiment in LAPA? If there are no limitations, could you briefly explain why, or provide evidence if applicable? C2) **The actual information predicted by the policy.** As revealed by Figure 10 and the responses to reviewers HSFB and sdKR, the latent capacity is very redundant, so that it can pass almost all pixel information through the latent. With the reconstruction loss, the use of action supervision does not actually eliminate distractors. It is possible that the action supervision only makes the output features more suitable for linear probing. If a stronger probing network (instead of a linear layer) is used, the observation embedding could achieve the same MSE loss as the latent actions. As the information the policy predicts from is nearly equivalent to the original observations, how is the proposed method fundamentally better than predicting observation embeddings, as in UniPi [1] or AVDC [2]? Could you elaborate on this? [1] Learning Universal Policies via Text-Guided Video Generation [2] Learning to Act from Actionless Videos through Dense Correspondences --- Reply to Comment 1.1.1: Comment: > C1) **Scalability to cross-embodiment web-scale data.** This is a good question, but we don't think it's quite fair to discuss it only in the context of our work. The constraints you refer to are not unique to our method.
The proposition "LAOM requires supervision, but LAPO/LAPA does not, and so LAOM is more limited" can be disputed, because formally (which can be easily verified by comparing the losses) LAOM, DynaMo, Moto, IGOR, LAPO, LAPA, Genie, and GR00T N1 are all the same method, differing only in the details of the underlying architecture, and so they all have the same limitations. Thus, it is not that "the proposed method requires action supervision during training", but rather that "all latent action learning methods using the LAPO objective require supervision **in the presence of distractors**". Our aim was not to propose a state-of-the-art LAM, but to highlight these limitations. As we discussed earlier, LAOM is more of a guide to improving LAPO results in the presence of distractors. However, as we discussed in the paper, without supervision a LAM cannot separate noise from control-related features on its own. We show that the reuse of real action labels helps significantly in such a setting. We agree that this is a rather restrictive requirement, but without supervision LAM (including LAPO) methods **will not work at all** in the presence of distractors (we demonstrate this in Figure 1). How can we provide supervision if we have no real action labels and/or have multiple unknown action spaces? This is an open question and a very fruitful direction for future research. For example, for egocentric videos, we could use hand tracking as a proxy action to supervise latent action learning. However, we believe that this is currently beyond the scope of our work. As for Open-X-Embodiment or ProcGen, there is no need to use supervision, as these datasets contain almost no distractors. The need arises when we start using e.g. Ego4D or other real-world data such as YouTube. We discuss other potential limitations in Appendix A in the current version of the paper.
> C2) **The actual information predicted by the policy.** This is a really important and valid concern, which, however, criticizes the whole direction of latent action pre-training. Is it true that LAM works better than other methods of pre-training at scale? The honest answer is that we don't know; there are no detailed investigations currently. However, it is gaining popularity as part of foundational models such as GR00T N1 & AgiBot, so we thought it was crucial to highlight the fundamental limitations of this approach. It is quite possible that in the presence of distractors LAM works no better than any other pre-training method. However, it is important to note that without the improvements suggested in our paper it does not work at all (possibly much worse than UniPi & AVDC). As for linear probing, in our final experiments we use an MLP with multiple layers as the decoder to predict ground-truth actions from BC outputs. With that in mind, we feel that the difference in the final performance reflects how good the latent actions really are. You are right that even after supervision, there is a lot of redundant information in latent actions, which means there is still a lot of room for improvement in the future. We hope we have addressed most of the reviewer's concerns.
Summary: This paper focuses on the setting of learning latent actions in the presence of background distractions. The authors investigate improving upon prior latent action pretraining work with recent advances in dynamics and latent action modeling. It shows that multi-step inverse dynamics, a large latent action dimension without quantization, forward dynamics in latent space, and augmentations help improve latent action quality when distractors are present, and that supervised training with a small amount of ground truth actions can help with latent action quality and close the gap to behavioral cloning performance under the background distraction setting. Claims And Evidence: The paper claims that background distractions require several changes to current latent action inference methods to improve latent action quality and supports it with experimental results. Methods And Evaluation Criteria: The evaluations overall make sense. The authors test on extracting latent actions with background distractions, and evaluate the latent action quality with a linear probe and downstream policy evaluation. Theoretical Claims: N/A Experimental Designs Or Analyses: Overall the experiments make sense. However, in this paper, there seems to be no bottleneck (neither VQ nor a dimensional bottleneck) in the latent action. One concern is whether the latent action, as evaluated by downstream policy performance and action probe MSE, is anything more than a high-dimensional projection of the observation embedding. It would be convincing to see an additional baseline where downstream policy training / action probe MSE is evaluated directly on the observation embedding. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper develops on recent advances in latent action pretraining such as LAPO, DynaMo, LAPA, etc., which investigate pretraining latent action policies and visual representations from video data.
There is also a rich line of work investigating policy robustness under visual distractions. Extracting latent actions from videos is a longstanding and important topic for learning robotic policies, as is dealing with background distractors. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: the paper investigates extracting latent actions from video data with background distractors, an important setting to consider for generalizing to Internet-scale videos. It shows that with a few modifications, prior work in latent action pretraining such as LAPO can be improved both in the distractor-free and background distraction settings. Weaknesses: see experimental designs and questions. Other Comments Or Suggestions: N/A Questions For Authors: - Clarification: is the BC policy also only trained on up to 128 trajectories? - Clarification: what is the IDM trained on? Is it trained on up to 128 trajectories of ground truth actions, then used to relabel the full dataset for downstream behavioral cloning? - Baseline: how large is the observation embedding dimension? In Figure 8(b), what is the action probe MSE if you probe directly from the observation embedding? - Question: why is it the case that LAOM + supervision achieves a low action probe MSE regardless of latent action dimension, and LAOM without supervision action probe MSE depends on the latent action dimension? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful to the reviewer for their time, constructive feedback, and suggestions for additional experiments, which we found very valuable. We have tried to answer the questions below. > Clarification: is the BC policy also only trained on up to 128 trajectories? > Yes, the BC baseline we show in Figures 1, 7, 9 uses the same architecture as the BC in LAM methods, but is simply trained from scratch only on trajectories with available ground truth action labels, from 2 to 128. We also use a separate BC for normalization on all figures. We pre-train it on full datasets with all action labels revealed to get the maximum possible score with ground truth actions. With such normalization, we can quantify how much performance we have recovered compared to having access to a fully action-labelled dataset. > Clarification: what is the IDM trained on? Is it trained on up to 128 trajectories of ground truth actions, then used to relabel the full dataset for downstream behavioral cloning? > Yes, this is an accurate description of the overall pipeline. > Baseline: how large is the observation embedding dimension? In Figure 8(b), what is the action probe MSE if you probe directly from the observation embedding? > Thanks for the suggestion! Due to time constraints, we ran these experiments with three random seeds, but only in the walker environment. Given the previous evidence, we are confident that the results will hold in the remaining environments and will include additional results in the camera-ready version of the paper. We took the observation embedding from the LAOM encoder and trained the linear probe to predict real actions, similar to probing from latent actions. We visualize the results in the following figures [[Figure 1](https://ibb.co/wFRGMhDV), [Figure 2](https://ibb.co/gL6L0zNf)]. As can be seen, for LAOM it is indeed the case that probe from observation embedding is better for smaller latent action dimensionality. 
This can be explained by the fact that the information bottleneck induces the IDM to mainly encode noise in latent actions, as it can better explain the dynamics (deterministic distractors in the background), while the observation embedding mostly preserves the information. At higher latent action dimensions, they are expected to equalize, as latent actions without a bottleneck can encode the full dynamics, including noise and real actions. This is exactly the effect we described in Section 4, which motivated us to add supervision. However, we see a different picture with LAOM+supervision, where the probe from the observation embedding is generally worse than from the latent actions, because with supervision we can ground the latent actions to focus on features relevant for control even with small dimensions, filtering out the noise. > Question: why is it the case that LAOM + supervision achieves a low action probe MSE regardless of latent action dimension, and LAOM without supervision action probe MSE depends on the latent action dimension? > We believe that in the absence of supervision, as we discuss in Section 4, the information bottleneck is detrimental, as it incentivises the IDM to encode into latent actions a minimum amount of information that is maximally predictive of the next observation. In the case of distractors, this will mostly be noise, as it is easier to predict deterministic videos in the background than actual actions (which also explain much more variation in the overall dynamics). By increasing the latent action dimension, we remove the bottleneck and allow LAOM to encode the full dynamics in actions, including real actions but also noise. On the other hand, LAOM+supervision grounds the latent action space to be predictive of actual actions, which can be much smaller because it does not need to explain noise (actual actions are only ~4-16 dimensions).
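The IDM/relabeling pipeline confirmed earlier in this thread (fit an action decoder on the small labeled slice, then relabel the full dataset for downstream behavioral cloning) can be sketched end-to-end with linear stand-ins. This is a toy illustration under strong assumptions of our own: noiseless latents that encode actions linearly, a 128-sample labeled slice in place of 128 trajectories, and a linear decoder instead of the small MLP:

```python
import numpy as np

rng = np.random.default_rng(1)

n_full, n_labeled, latent_dim, action_dim = 5000, 128, 32, 4

# Pretend a pretrained LAM already produced latent actions for the full
# dataset, and that they encode the real actions linearly and noiselessly.
latent_actions = rng.normal(size=(n_full, latent_dim))
true_map = rng.normal(size=(latent_dim, action_dim))
real_actions = latent_actions @ true_map

# Step 1: fit an action decoder on the small labeled subset only.
W, *_ = np.linalg.lstsq(
    latent_actions[:n_labeled], real_actions[:n_labeled], rcond=None
)

# Step 2: relabel the *entire* dataset with decoded actions; a downstream
# BC policy would then be trained on these relabeled actions.
relabeled_actions = latent_actions @ W
relabel_mse = float(np.mean((relabeled_actions - real_actions) ** 2))
```

In practice the latents are noisy and the decoder is a small MLP, but the same two-step structure applies: a few labeled samples ground the latent action space, and the relabeled dataset is what the BC policy consumes.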
Summary: - The paper focuses on LAMs, which aim to infer control actions from unlabelled videos - Here the authors note a benefit of reusing action labels from later in the pipeline to help focus (through supervision) latents on control actions - This is most effective in the presence of 'distractors', ie non-control action changes - Empirical results are shown on the Distracting Control Suite - A broader investigation of various model design choices and extensions is conducted Claims And Evidence: Yes, see strengths weaknesses. Methods And Evaluation Criteria: Yes, see strengths weaknesses. Theoretical Claims: NA Experimental Designs Or Analyses: See strengths weaknesses. Supplementary Material: NA Relation To Broader Scientific Literature: See strengths weaknesses. Essential References Not Discussed: Missing this, which follows a pipeline similar to LAPA. - IGOR: Image-GOal Representations are the Atomic Control Units for Foundation Models in Embodied AI Other Strengths And Weaknesses: Strengths - While the problem setting departs from the standard three-stage LAM setup, in my opinion it remains a realistic use-case, where the same set of labels are being used twice in a smart way - The method is simple - Overall the method works well, the paper in general directly addresses an issue many LAM users may be interested in - Good related work description - Thorough investigation of various aspects of LAMs going beyond the main message provides breadth to the paper (eg fig 8, 9, 10, 12) - Reasonable eval metrics, normalizing by BC with all labels was a nice touch - Reasonable baselines, nice to see improvements over the IDM baseline - Interesting intuition about differences between LAM and IDM approaches (generalization) Weaknesses - One major difference with previous work is the decision to open up the bottleneck -- both removing the quantization and moving to a large 8192 dims.
This is originally justified by fig 4 and 5, however these only measure MSE -- naturally a wider bottleneck will allow more information about both distractors and actions into the latent, so measuring MSE on actions will reduce. But is having extra distractor information not harmful? Fig 8c partially suggests not. - In general I'm surprised with this capacity the model doesn't just pipe the entire next observation through the latent -- one possible reason I thought is that the architectures are not powerful enough to unpack all details through the bottleneck, see q about architectures. - Overall this is not a critical point as it seems to be necessary to get the method to work anyway. - Some of the architectural choices -- eg latent reconstruction and augmentation -- are becoming pretty standard in other LAM works, so there is limited novelty there, though the paper doesn't overclaim on this aspect anyway. - I'd be interested to dive deeper into the differences between LAM and IDM -- fig 8a suggests the gap closes at some point. But this is probably better left to future work. Questions - Could you detail the architecture for IDM FDM and pre-trained latent policy? Also BC baseline. Are these CNNs? In appendix I only saw 'encoder num res blocks' - Augmentation details -- is there any structure in how you do the random augs? Like do inputs in a sample receive the same crop or? - Would love to see a zoomed-in plot of fig 8 b as can't tell how the MSE changes for the LAOM+supervision. - For fig 10, what representation is used for the IDM? The final predicted action? - In fig 8 c, if it seems like the models keep improving with latent dim -- why stop at 8192? Other Comments Or Suggestions: NA Questions For Authors: See strengths weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and constructive feedback. We address the main questions below. > Missing this, which follows a pipeline similar to LAPA: IGOR… > We will include the citation, thank you for your suggestion. > … naturally a wider bottleneck will allow more information about both distractors and actions into the latent …. But is having extra distractor information not harmful? > As we explained in more detail in Section 4, the removal of quantization and the increase in dimensionality are a necessity. Without these changes, latent actions will have no information about real actions at all, and as a consequence will be useless for further pre-training, as demonstrated by the LAPO performance. The best we can hope for in the general case (without supervision) is to encode all dynamics, including noise, but most importantly real actions, into latent actions. That's what the changes in LAOM do. LAOM also does not guarantee that latent actions will be minimal. However, now that we are sure that the real actions are contained in the latent ones, we can hope that with a little supervision we can extract them in a generalizable way. We believe that the results in Figures 1, 7 and 8c clearly show that this is indeed the case. > I'm surprised with this capacity the model doesn't just pipe the entire next observation through the latent > This is an interesting question that we did not explore in depth as we felt it was beyond the scope of the study. However, we did not observe any evidence of shortcut learning. It is important to note that we used encoders that were not very small for the given task. Given that we are working with 64x64 images, they have enough capacity to predict the next state in pixel space very accurately. As we show in the appendix, the encoders take up about 200M parameters in total.
> I'd be interested to dive deeper into the differences between LAM and IDM > We believe that IDM will perform better than LAM in the limit (e.g. see Figure 9 in [GR00T N1](https://arxiv.org/abs/2503.14734)), but it's also limited to a single action space. However, when the number of labels is very limited, LAM will perform better due to better generalization. Overall, we believe that LAM+supervision combines the best of both worlds with better use of existing action labels. > Could you detail the architecture for IDM FDM and pre-trained latent policy? Also BC baseline. Are these CNNs? In appendix I only saw 'encoder num res blocks' > We provide some details of the architectures in Appendix D. We use the same visual encoder architecture for all methods (IDM, FDM, BC), which is a simple ResNet borrowed from the open source LAPO code. For FDM we use an identical architecture, swapping conv downsampling with transposed conv upsampling. LAPO, similar to the original code, uses separate encoders in IDM and FDM, while LAOM shares one encoder between them. For the LAOM latent FDM we use multiple MLP blocks, inspired by the MLP block from the transformer architecture. To process successive observations, we concatenate images across channels. BC uses a ResNet + small action head. The action decoder is a two-layer MLP with a hidden dim of 256. > Augmentation details > We use several types of augmentations: shift, rotate, change perspective, and combinations such as shift-rotate, rotate-perspective, etc. (as we note in the paper, they are taken from Almuzairee et al., 2024). We sample augmentations for each sample in a batch, but share them across the dimension of the frame stack. > Would love to see a zoomed in plot of fig 8 b as can't tell how the MSE changes for the LAOM+supervision. > Sorry, here's an enlarged [Figure](https://ibb.co/5X6B5KdH). We will add it to the appendix. Overall, the MSE does not change that much here. > For fig 10, what representation is used for the IDM?
> The IDM can be schematically described as $a = h(f(s_t), f(s_{t+1}))$, where $f$ is a ResNet encoder and $h$ is an action head consisting of an MLP. For visualization we used the observation embedding after the ResNet encoder, that is $f(s_t)$. > In fig 8 c, if it seems like the models keep improving with latent dim -- why stop at 8192? > Mostly for practical reasons, since increasing the dimensionality of latent actions increases the overall computational requirements for pre-training. For example, we had to significantly increase the size of the BC in order for it to learn latent actions of dimension 8192 accurately enough (real actions have dimensionality of about 4-16). In fact, to perform with similar quality on real actions, the BC might have been about 4-8 times smaller, because predicting 4 numbers is much easier (LAPA reports similar results). To ensure fairness we used the larger BC size in all experiments. Our main goal was to show a trend, so we did not see the need to go further. --- Rebuttal Comment 1.1: Comment: Thank you for this response. For now I maintain that it is a strong paper and is worth presenting to ICML attendees -- will continue discussion with reviewers as needed.
On Measuring Long-Range Interactions in Graph Neural Networks
Accept (poster)
Summary: The Long Range Graph Benchmark (LRGB) is a widely adopted tool for evaluating the long-range capabilities of frameworks in long-range graph tasks. This paper identifies its limitations and introduces a formal range measure for operators on graphs, encompassing both node-level and graph-level tasks. Furthermore, in light of the current simplicity of synthetic tasks (which predominantly rely on long-range interactions), this paper proposes a redesign of synthetic graph tasks. Experimental results demonstrate that the method proposed in this paper provides a better understanding of long-range tasks in graph machine learning. Claims And Evidence: The claims in the material are supported by clear and compelling evidence. Methods And Evaluation Criteria: The graph long-range evaluation metric proposed in this paper has been validated on both synthetic and real-world datasets, demonstrating its practical significance. Theoretical Claims: This paper provides clear definitions and analyses of evaluation metrics at the graph level and node level, the scope of their influence, and relevant examples. The corresponding theoretical claims are largely accurate. Experimental Designs Or Analyses: The experimental design is conducted on both synthetic and real-world datasets, and the validity of the proposed framework is demonstrated through theoretical analysis and foundational experiments, making the findings convincing. Supplementary Material: The supplementary materials provide additional details on relevant research, as well as implementation specifics and experimental results. Relation To Broader Scientific Literature: This paper primarily proposes improvements to the previous graph long-range evaluation tool, LRGB, addressing its limitations. It represents a progressive advancement in the field. Essential References Not Discussed: This paper primarily introduces enhancements to the previous graph long-range evaluation tool, LRGB, addressing its limitations.
It constitutes a progressive advancement in the field. Other Strengths And Weaknesses: Strengths: (1) The paper is written clearly, and the problem is described effectively. (2) The theoretical foundation is well-established, and the theoretical and experimental validations are comprehensive and convincing. Weaknesses: (1) This paper primarily proposes a new method for evaluating long-range dependencies in graphs. However, there are concerns regarding the true extent and scope of its impact. Other Comments Or Suggestions: ## Update for Rebuttal I confirm that I have read the author response, I will keep my score Questions For Authors: refer to weakness Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback and recognition of the practical significance and credibility of our work. We address the reviewer’s concerns below. We also note that, following suggestions from other reviewers, we have performed additional experiments for GCN and GT on Cora (as a known short-range dataset to compare with LRGB), virtual node and activation function ablations for our synthetic task corresponding to Fig. 4, a VN and pure Transformer ablation for LRGB, and experiments on the heterophilous tasks Roman-Empire and Amazon-Ratings for a GCN and GT. All additional experiments will be included in the final paper. We discuss these results in the relevant reviewer responses, and would be grateful if the reviewer would consider reading these responses and examining the figures, which are compiled in a pdf at the following link: https://fuchsia-lina-52.tiiny.site/ >*“This paper primarily introduces enhancements to the previous graph long-range evaluation tool, LRGB, addressing its limitations…there are concerns regarding the true extent and scope of its impact”* We thank the reviewer for their feedback. While assessment of LRGB is indeed one of the aims of our work, we would like to politely emphasise that our contributions are far more general and go significantly beyond testing the appropriateness of an existing benchmark. Firstly, our measure can be applied to any model trained on any task; it not only enhances the usefulness of LRGB in assessing range but *any graph learning benchmark* — for example, see our additional experiments, in which we assess the range of Cora as well as two heterophilous tasks. Secondly, LRGB is purely an empirical evaluation method based on performance gap. Our range measure provides a theoretically robust range measurement tool for *any model trained on any task* that can be considered in addition to (or in the absence of) performance gap. 
We believe this is a significant theoretical contribution that paves the way for further analysis in this direction. Thirdly, while we use and motivate our range measure only for graph ML tasks, as defined it is a general measure applicable to any operator (including any neural network), and for any data structure for which one can define a distance metric. As a result our method can be applied beyond graphs to the general ML setting, as has been done with other works such as [1]. One example which we leave for future work is a theory-driven evaluation of long-context issues observed for Transformers operating on sequence data, such as is observed with the ‘lost in the middle’ problem [2]. --- We appreciate the reviewer’s thoughtful feedback and hope that our clarifications have effectively conveyed the broader impact and significance of our work. We are, of course, happy to address any further concerns if the reviewer wishes to elaborate on them. Given the reviewer’s positive comments about our work, our above clarifications, and the significant additional experiments we have provided to strengthen the final version, we would be grateful if the reviewer would consider raising their score. [1] Barbero, Federico, et al. "Transformers need glasses! information over-squashing in language tasks." Advances in Neural Information Processing Systems 37 (2024): 98111-98142. https://arxiv.org/abs/2406.04267 [2] Liu, Nelson F., et al. "Lost in the middle: How language models use long contexts." Transactions of the Association for Computational Linguistics 12 (2024): 157-173. https://arxiv.org/abs/2307.03172
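As an aside, the kind of Jacobian-based range measurement discussed in this response can be illustrated on a toy graph. The sketch below is our own instantiation, an expected graph distance under normalized absolute Jacobian entries, for a two-hop averaging operator on a path graph; the paper's exact definition and normalization may differ:

```python
import numpy as np

n = 7  # path graph with n nodes

# Row-normalized adjacency with self-loops; the operator is two-hop
# averaging y = A_hat @ A_hat @ x, whose Jacobian is simply A_hat^2.
A = np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
A_hat = A / A.sum(axis=1, keepdims=True)
J = A_hat @ A_hat

# Shortest-path distances between nodes of the path graph.
dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)

# Per-node range: expected graph distance under the node's normalized
# absolute Jacobian row -- large values mean far-away inputs matter.
weights = np.abs(J) / np.abs(J).sum(axis=1, keepdims=True)
node_range = (weights * dist).sum(axis=1)
```

For this strictly two-hop operator every node's range lies between 0 and 2; a trained model whose Jacobians concentrate mass at larger graph distances would score a correspondingly larger range.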
Summary: The paper works in the area of long-range dependencies for graph neural networks. Its main contribution is to define a new metric to measure the range of a task. This metric can also be applied to GNNs that are trained on a task, approximating the true range of the task. Experiments on the LRGB datasets indicate that PASCAL-VOC is indeed a long-range task while the peptides tasks are not. Claims And Evidence: Almost all claims are thoroughly supported by evidence. Only the claim that high values in GPS indicate higher long-rangedness of the problem is not properly validated as it could very well be that GPS is always long-ranged (even if it does not need to) while MPNNs are only long-ranged if needed. While I generally agree with the claim, I believe that GPS behaves oddly in the experiments and should thus be checked further. Methods And Evaluation Criteria: Yes, the proposed methods and evaluations make a lot of sense. It would still be nice to compare to some (known) short-range datasets to see the difference more pronounced. Theoretical Claims: The proofs are extremely straightforward and easy to read, and even included in the main paper. I find this very refreshing and very positive. Experimental Designs Or Analyses: I did not run the code myself. The experimental design mostly makes sense, except that I would like a few additional experiments to strengthen the claims about the LRGB datasets. It would also be nice to see whether Figure 4 would look the same for networks that do use activation functions and maybe even virtual nodes. Supplementary Material: I read the supplement and am mostly happy with it (as I said before, I was hoping for a few additional experiments). Relation To Broader Scientific Literature: The new range metric is novel and a great addition to the arsenal that we have to evaluate what a GNN actually does as well as to evaluate what a graph task actually requires. 
Essential References Not Discussed: I would have expected to find Gilmer et al as the citation for virtual nodes and not two 2024 papers. The paper is cited, but in an unexpected context (for chemistry, not for virtual nodes). Other Strengths And Weaknesses: The paper is really well-written and easy to read! It is also thoroughly proofread which I greatly appreciate. Other Comments Or Suggestions: p1c2: I would have expected virtual nodes to be discussed here as well as it is an effective way to enable long-range interactions. p2c2: I would have liked a bit more intuition of when the range should be large and when it should be small. And I found the example given there not super helpful as it is quite technical without having an idea what we actually want to achieve, but that might be personal. p3c2: It would be nice to state more explicitly that the "Range of a GNN" is the effective range of what the trained GNN computes (and not something that only depends on the architecture) p3c2, above eq1: I believe that the connection between Jacobian and best linear approximation is not universally known. (At least I had to stop there for a moment while reading). Since you later mention Taylor anyway, it might be a good idea to mention it here as well since it's the first-order Taylor term. p4c1: It would be nice to mention in which context the influence distribution has been defined (as it is not another way of defining range) p5c1 k-Power: is this with or without self-loops (I would guess without, but it would be nice to be explicit) Figure 6/7: Since the range is not that large, it might be nice to write the values as 0.3, 0.6, 1, 10 etc instead of the scientific notation which is a lot slower to parse. p4c1 L187: reason -> reasons p4c2 L209: Erdős (with that weird o) and L213 increase->increases p5c1 L250: Figure 2 -> Figure 3. Also Figure 3 has left/right which should be mentioned in the paragraph. Figure 3 caption: _a_ linear increase ...
and _a_ sublinear increase Questions For Authors: My main questions are about the experiments and resulting claims which are already in the review above, but I try to concisely put them here too: Q1) Does Figure 4 work similarly for GCN with activation functions and possibly VN? Q2) what happens to fig 4/5 for GPS? Will that behave oddly? Q3) Is there a way to exclude the alternative claim "GPS is always more long-range, even if it is not needed" which could also explain the figures. A possible way to check that would be using just a transformer without message passing and/or to run everything on one or two datasets that are known to be short-range (e.g. Cora). Effectively: while I agree with the claim, I think using GPS here does not help without additional experiments for e.g. virtual nodes and pure transformers, as well as "baseline" results of GPS on known short-range tasks for calibration. Would it be possible to add those experiments to make the claim stronger? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback, and for appreciating the novelty of our contribution and its significance as an evaluation tool for GNNs and graph tasks. We address their concerns below. As suggested, we have performed additional experiments for GCN and GT on Cora (as a known short-range task to compare with LRGB), as well as virtual node and activation function ablations for our synthetic task corresponding to Fig. 4, and a VN and pure Transformer ablation for LRGB. We also performed additional experiments, as suggested by other reviewers, for the heterophilous tasks Roman-Empire and Amazon-Ratings for a GCN and GT. All additional experiments will be included in the final paper. We discuss these results in the relevant reviewer responses; the figures are compiled in a pdf here: https://fuchsia-lina-52.tiiny.site/ >*“Q1) Does Figure 4 work similarly for GCN with activation functions and possibly VN?”* We have extended the synthetic experiment to produce the equivalent of Fig. 4 for both a GCN with GeLU activation, and a GCN with a VN. We report the results in the linked pdf, Figs. A8-9. We notice that adding a nonlinearity does not considerably change the results. Adding a VN, on the other hand, does: where Fig. 4 showed models with an initial negative range bias before converging during training towards the true range from *below*, the addition of a VN appears to induce a positive bias, so models converge to the true range from above — though they do still converge, i.e. the trained model range still approximates the true task range. >*“Q2) What happens to fig 4/5 for GPS?”* We performed this experiment and found that the GPS architecture is not well-aligned with this synthetic task: it trains irregularly and attains poor performance. As a result, there is no clear interpretation of the range of the model on this task. 
This points to the importance of inferring conclusions about tasks by an examination of both the range and performance across different models (see further discussion below). An analysis of the relationship between performance and range of different Transformer architectures on different tasks is an interesting direction which we leave for future work. >*“Q3) Is there a way to exclude the alternative claim 'GPS is always more long-range, even if it is not needed' which could also explain the figures”* We agree with the reviewer that high range scores for GPS are not sufficient, by themselves, to indicate that a task is long-range. This is clear from Figs. 6-7 in the paper, from which we conclude that the Peptides tasks are not long-range while VOCSuperpixels (in relative terms) is, despite very similar, high range scores for GPS. We base our claim about the long-rangedness of VOC on two factors: (i) that the *MPNNs*, rather than GPS, have higher relative range (as the reviewer points out, MPNNs are long-range *if needed*, suggesting that for VOC, longer range is necessary); (ii) that the range gap between GPS and MPNNs is accompanied by a performance gap, indicating that models with higher range are better suited to the task. Neither of these is the case for Peptides. To summarise, it may be that GPS will have a high range for a task when it is not required, but by comparing *relative range* as well as *relative performance over multiple models*, we can make an assessment about the underlying task. To address the second part of the comment, while we do find that GPS and other GTs are biased towards higher range, and often are long-range when it is not needed, they can learn to be short range. See Fig. A4, in which we plot range against validation loss for the first 25 epochs of training a GCN and GT on Cora, a known short-range task. 
We see that the GT grows rapidly long-range in early epochs, but quickly learns to be short-range until it achieves its maximum validation accuracy at epoch 25 (after which it overfits). This excludes the alternative claim. We follow the reviewer’s suggestions for additional experiments to better validate our claims. In addition to Cora, we perform experiments for a GT (GPS without MPNN component) and GCN with a virtual node for the three LRGB tasks; see Figs. A5-7. The results support our findings: GT performs very similarly to GPS, and +VN increases range, with a corresponding increase in performance for VOC but not for Peptides, consistent with our conclusions about each task’s range. >*“k-Power: is this with or without self-loops?”* Without; we will clarify this in the final version. --- We thank the reviewer again for their positive and useful feedback, as well as for their other detailed suggestions, which will be added for the final version. We are happy to address any further concerns. Given the reviewer’s comments about the novelty and utility of our work, and the significant additional experiments we have provided to strengthen the final version, we would be grateful if the reviewer would consider raising their score. --- Rebuttal Comment 1.1: Comment: Thanks a lot for the extensive and helpful answers and the additional experiments which (from my perspective) further strengthen the already strong paper. I have thus adjusted my score. --- Reply to Comment 1.1.1: Comment: We are grateful for the reviewer's appreciation of our responses and additional experiments, and thank them for raising their score. We also thank them again for their time, insightful feedback, and engagement with our work.
Summary: This paper introduces a formal measure for evaluating long-range interactions in Graph Neural Networks (GNNs), addressing the limitations of existing empirical benchmarks like LRGB, which lack theoretical grounding. The proposed measure quantifies a model’s ability to capture long-range dependencies, validated through synthetic experiments and applied to assess common benchmarks. This work provides a principled framework for studying and improving long-range interactions in GNNs. Claims And Evidence: - It is not clear why Jacobian is used for node-level task while Hessian is used for graph-level task. Could you please clarify? Methods And Evaluation Criteria: The authors claim that the range of a trained model that solves a task approximates the range of the underlying task. However, "solving a task" is not clearly defined. In Section 6.2, where experiments are conducted on real-world datasets, the authors evaluate four models, but none achieve perfect performance. This raises an important question: Is the model failing to capture long-range interactions, or does the task itself not require them? I suggest the authors refine their claim to be more rigorous and explicitly clarify the criteria for determining when a task is truly "solved." Theoretical Claims: I did not check the proofs comprehensively. Experimental Designs Or Analyses: Heterophilous datasets are known to benefit from long-range interactions. It would be valuable to empirically analyze the range of heterophilous tasks to better understand their dependence on long-range dependencies. I suggest the authors examine datasets such as Roman-Empire and Amazon-Ratings [1] to provide deeper insights into how their proposed measure applies to heterophilous settings. [1] Platonov, O., Kuznedelev, D., Diskin, M., Babenko, A., & Prokhorenkova, L. (2023). A critical look at the evaluation of GNNs under heterophily: Are we really making progress?. arXiv preprint arXiv:2302.11640. 
Supplementary Material: Yes, I have looked at the supplementary material. Relation To Broader Scientific Literature: The proposed method is based on influence scores and adapts them to measure the influence from nodes at different ranges. I suggest the authors discuss how this concept aligns with related work in the broader literature, such as [1], to provide readers with a more comprehensive understanding of existing approaches and how their method fits within this context. [1] Koh, P. W., & Liang, P. (2017, July). Understanding black-box predictions via influence functions. In International conference on machine learning (pp. 1885-1894). PMLR. Essential References Not Discussed: N.A Other Strengths And Weaknesses: The adaptation of influence scores to define a range measure is an interesting approach. Additionally, the findings on LRGB, particularly that existing methods may not be effectively learning from long-range interactions is interesting. Other Comments Or Suggestions: ## update after rebuttal I have read the rebuttal and would like to keep my score Questions For Authors: Why use Jacobian for node-level task and Hessian for graph-level task? Can you explain in more details for better clarity? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback and for appreciating our contributions. We address their concerns below. As suggested, we have performed additional experiments for heterophilous tasks, as well as on Cora, virtual node and activation function ablations for our Fig. 4 synthetic task, and a VN and GT ablation for LRGB. All additional experiments will be included in the final paper. We discuss these results in the relevant reviewer responses; the figures are included in a pdf here: https://fuchsia-lina-52.tiiny.site/ >*“It is not clear why Jacobian is used for node-level task while Hessian is used for graph-level task.”* Both our node- and graph-level range measures require a term that captures pairwise interactions between nodes. For node-level tasks, the $(i,j)$-th element of the Jacobian represents the sensitivity of node $i$’s output features to node $j$’s input features. For graph-level tasks with a single output, the Jacobian is a vector and so does not contain pairwise node information. The Hessian, on the other hand, is an $N\times N$ matrix which does encode this information. Specifically, its elements denote the sensitivity of the output to each pair of input node features. In other words, for node-level tasks we use the first-order Taylor approximation, and for graph-level, since the Jacobian is unsuitable, we use the second-order, the minimal-order approximation to obtain any pairwise information between nodes. >*“'...solving a task' is not clearly defined … on real-world datasets, the authors evaluate four models, but none achieve perfect performance.”* We thank the reviewer for raising this interesting point, which we will clarify in the final version. We informally define a task as ‘solved’ when validation loss of a model approaches zero; Fig. 4 demonstrates this empirically, showing that as loss decreases, the model range approaches the known range of a synthetic task. 
Theoretical results linking the accuracy of the estimated range with validation error are an important direction which we reserve for future work. The reviewer is correct that, in our real-world experiments, models do not achieve perfect performance, meaning that the resulting range measures are only approximations of the true range of the underlying task. In the case of real-world tasks, the range is not just an approximation of the task range, but is also the true range of a model trained on that task. It is not our intention that a single range score be used to draw conclusions about a task. Instead, we infer conclusions about tasks by an examination of both the range and performance across different models: if range and performance correlate positively, it suggests a more long-range task, whereas no correlation suggests a task may be less long-range. In this way, our measure can serve as a practical tool for evaluating tasks for which no existing models can achieve perfect performance (i.e. all non-trivial tasks). >*“It would be valuable to empirically analyze the range of heterophilous tasks…”* We thank the reviewer for this suggestion. While we agree that it is worth investigating the relationship between label heterophily and long-range dependency, we are not aware of work explicitly making this connection. That said, we follow the reviewer’s suggestion and report the range during training for Roman-Empire and Amazon-Ratings in Figs. A1-2 in the linked pdf, taking hyper-parameterizations from [1]. We find that GCN and GT (which perform local convolutional and attentional message passing respectively) converge at a range of ~1 hop or lower for both tasks, but GT performs better, suggesting that range is a less important factor for these tasks than how information is propagated through the graph. This is further supported by the lack of correlation observed between range and performance between GCN and GCN+VN, especially for Roman-Empire. 
>*“The proposed method is based on influence scores… I suggest the authors discuss how this concept aligns with related work in the broader literature”* The suggested reference is indeed an important prior work. [2] introduced the notion of influence function to study the importance of certain training points on a model’s prediction, from which [3] adapted influence functions to the context of graph learning to study and improve information propagation in GNNs. A discussion of the paper along these lines will be included in the final version. We thank the reviewer for this suggestion, and will make sure to appropriately connect our work to the relevant broader literature. --- We thank the reviewer again for their feedback. We are happy to address any further concerns, and would be grateful if the reviewer would consider raising their score in light of our responses and additional experiments. [1] Platonov et al. (2023) https://arxiv.org/abs/2302.11640 [2] Koh et al. (2017) https://arxiv.org/abs/1703.04730 [3] Xu et al. (2018) https://arxiv.org/abs/1806.03536
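For concreteness, the first-order (Jacobian) pairwise sensitivity that underlies our node-level range measure can be illustrated with a minimal NumPy sketch. This uses a hypothetical linear two-step propagation model on a path graph, not the actual trained architecture; for this linear model the Jacobian equals the squared normalised adjacency, so a node can only be influenced by nodes within two hops:

```python
import numpy as np

# Toy setting: path graph on 5 nodes, linear two-step propagation
# y = A_hat^2 x (a hypothetical stand-in for a GCN without activations).
# The Jacobian dy_i/dx_j is then exactly (A_hat^2)_{ij}.
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
A_hat = A / A.sum(axis=1, keepdims=True)  # row-normalised adjacency

def model(x):
    return A_hat @ (A_hat @ x)

# Estimate the Jacobian by finite differences and compare with A_hat^2.
eps = 1e-6
x0 = np.random.default_rng(0).normal(size=n)
J = np.zeros((n, n))
for j in range(n):
    e = np.zeros(n)
    e[j] = eps
    J[:, j] = (model(x0 + e) - model(x0)) / eps

assert np.allclose(J, A_hat @ A_hat, atol=1e-4)
# Node 0 is insensitive to node 4 (four hops away) but sensitive to
# node 2 (two hops away), reflecting the model's effective range.
assert abs(J[0, 4]) < 1e-8 and J[0, 2] > 0
```

The range measure then aggregates such pairwise sensitivities weighted by hop distance; for graph-level tasks, the analogous pairwise term comes from the Hessian, as discussed above.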
Overcoming Multi-step Complexity in Multimodal Theory-of-Mind Reasoning: A Scalable Bayesian Planner
Accept (spotlight poster)
Summary: This paper primarily aims to tackle multimodal ToM, though in practice, their main focus is still on complex multi-step reasoning tasks, with the multimodal aspect being somewhat secondary in their method. They propose the "Weak-to-Strong Control" strategy, which modifies the probability distribution at the output layer of a small model to directly adjust the corresponding probability distribution of a large model. This allows them to enhance ToM reasoning capabilities without fine-tuning the large model, thereby reducing computational costs. The experimental results support their claim. Claims And Evidence: 1. The authors claim that their "Weak-to-Strong Control" strategy enhances ToM reasoning while reducing computational costs, but their experiments only compare 405B + 8B control to an untrained 405B model, not to a task-trained 405B. This means they only prove performance improves under cost constraints, but not that their method is the best way to enhance ToM reasoning if cost were not a concern. 2. The authors demonstrate that their method achieves better generalization by outperforming direct inference in unseen environments such as outer space, ancient Egypt, fairy tales, the Wild West, and medieval Europe. However, these tasks remain structurally similar, primarily involving object-location-based reasoning. It remains unclear whether their approach would generalize equally well to other types of ToM tasks, such as inferring social relationships or tracking psychological shifts in negotiations, where reasoning dynamics differ significantly. Methods And Evaluation Criteria: The proposed method is actually more similar to adapter tuning and LoRA-style approaches, which aim to achieve better performance by fine-tuning fewer parameters rather than performing full model fine-tuning. A more reasonable baseline for comparison should be these parameter-efficient fine-tuning methods. 
The paper itself feels somewhat inconclusive, as I don’t see a strong connection between the proposed method and multimodal ToM. Theoretical Claims: Theorem 1 is mathematically valid and justifies that Weak-to-Strong Control allows the large model to approximate a post-trained model without fine-tuning, ensuring minimal divergence through controlled adjustments. Experimental Designs Or Analyses: The experiments show that 405B + 8B control improves performance compared to an untrained 405B model, but without comparing to a fine-tuned 405B or LoRA/adapter tuning baselines, it's unclear if this is the best approach for improving ToM reasoning. Also, an ablation study is needed to confirm whether the improvement comes from Weak-to-Strong Control itself or just from adding a small model. Supplementary Material: No supplementary material provided. Relation To Broader Scientific Literature: The paper builds on ToM reasoning and efficient fine-tuning methods by having a small model adjust a large model’s outputs instead of fine-tuning, but without comparing to LoRA, adapter tuning, or fine-tuned large models, it’s hard to tell if this is actually the best approach, and the multimodal angle feels more like an add-on than a core focus. Essential References Not Discussed: No Other Strengths And Weaknesses: The paper is well-structured. Other Comments Or Suggestions: The generalization tests focus only on object-location-based tasks; it would be more convincing if other scenarios such as social interactions or psychological reasoning could be added. But this might be hard, so if not able to, it is fine. Questions For Authors: As I discussed above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We sincerely appreciate Reviewer Dtd9’s valuable comments and suggestions.** --- **Q1:** *Comparison with fine-tuned large LMs (e.g., 70B/405B)* **A1:** **Table D: https://anonymous.4open.science/r/response_tom-BD87/tableDEFGH.md** Directly fine-tuning a 405B model is practically infeasible for most institutions due to extreme GPU requirements (~50–64 Nvidia H100 GPUs). Therefore, we approximated this scenario using the more manageable Llama-3.1 70B model. In our paper, Tables 2 & 3 have demonstrated that the fully post-trained 70B LM performs worse than the proposed method (8B+70B, 4B-depth+70B, and 4B-width+70B). This suggests that our method not only reduces resource usage but also achieves superior generalization, aligning with recent literature showing generalization degradation after fine-tuning large LMs [1-3]. Results (Table D) on Llama-3.3 70B (widely considered equivalent to 3.1-405B by the community) further support this conclusion. (See also our detailed discussion in Q1-A1 to Reviewer DnoB). --- **Q2:** *Comparison with parameter-efficient fine-tuning (PEFT) such as adapters or LoRA.* **A2:** **Table E: https://anonymous.4open.science/r/response_tom-BD87/tableDEFGH.md** The proposed weak-to-strong control is fully orthogonal and complementary to PEFT techniques, i.e., we can combine our method with any PEFT technique. In fact, our small LMs are trained by LoRA, as described in L191-right. Table E further confirms our method's consistent effectiveness regardless of the PEFT choice for small LM. Directly applying PEFT to large pretrained LMs performs worse than our method (Table E). As discussed above, our method avoids fine-tuning large LMs and thus preserves their pretrained mental/world knowledge [1-3], essential for generalization in multimodal ToM tasks. 
--- **Q3:** *If other scenarios such as social interactions or psychological reasoning can be added* **A3:** **Table A (same as Table F): https://anonymous.4open.science/r/response_tom-BD87/tableAB.md** We sincerely appreciate your recognition of the difficulty of testing nuanced social generalization. Inspired by your feedback, we expanded our evaluation using MuMA-ToM [4], a benchmark explicitly designed for multi-agent social interaction tasks involving belief inference, social goal inference, and belief-of-goal inference (Table A). Our model achieves competitive results to LIMP [4], the expensive GPT-4o-based SoTA method, demonstrating clear generalization across social relationship tasks. (See also our detailed discussion in Q2-A2 to Reviewer dLAb). --- **Q4:** *Connection / how the method addresses multimodal ToM challenges.* **A4:** Multimodal ToM tasks uniquely require integrating implicit world knowledge and dynamic temporal mental-state reasoning, which significantly differ from standard STEM reasoning tasks. Our method addresses these challenges explicitly through two key innovations: 1. Leveraging the vast implicit knowledge encoded in scaled-up pretrained LMs (e.g., 405B parameters), crucial for understanding complex social interactions and environmental contexts. 2. Structuring dynamic belief inference explicitly using Bayesian Inverse Planning (BIP), a cognitive-science-based framework designed specifically for ToM. This combination strategically bridges the gap between abstract multimodal contexts and nuanced mental-state reasoning, effectively resolving the multimodal complexity identified in our Figure 1. --- **Q5:** *Ablations clearly isolating Weak-to-Strong Control's contribution.* **A5:** **Tables G & H: https://anonymous.4open.science/r/response_tom-BD87/tableDEFGH.md** To clarify its impact, we demonstrate two ablation studies: - We compare with naïvely post-trained models. 
Results summarized in Table G (i.e., Tables 2, 3 & D) show consistent performance drops without weak-to-strong control. This demonstrates clearly that Weak-to-Strong guidance enhances the large LM’s ability to generalize by effectively leveraging its pretrained world and mental-state knowledge. - In Table H, when directly combining a small LM and a large LM (directly adding their logits) without our structured weak-to-strong adjustment, the performance is lower than our method. Thus, naïvely adding a small LM is suboptimal. Explicit weak-to-strong control is essential for abstracting the smaller LM’s specialized ToM behavior to leverage the large LM’s pretrained knowledge. Overall, these ablations conclusively validate that our mechanism is both critical and independently responsible for our method’s demonstrated role in ToM grounding. --- **References** - [1] Overtrained Language Models Are Harder to Fine-Tune, arXiv:2503.19206 - [2] Understanding Catastrophic Forgetting in Language Models via Implicit Inference, ICLR 2024 - [3] Spurious Forgetting in Continual Learning of Language Models, ICLR 2025 - [4] MuMA-ToM: Multi-modal Multi-Agent Theory of Mind, AAAI-25 (Oral) --- --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response! I realise now that I had misunderstood a few things when I first read the paper. The additional explanations and experiments helped clear up my main concerns. I especially appreciate the added results — social interaction and psychological reasoning are particularly challenging, and I had initially assumed they were being avoided on purpose. --- Reply to Comment 1.1.1: Comment: We sincerely thank Reviewer Dtd9 for your thoughtful reading, detailed consideration, and updated feedback. Your insightful comments have helped us clarify and strengthen our work—especially regarding challenging scenarios. We greatly appreciate your time and effort!
Summary: This paper addresses the scalability limitation of Theory-of-mind (ToM) models in multi-modal environments. Predicting agents' goals and beliefs in complex mutli-modal environments involving vision and language requires visual understanding, multiple steps planning and reasoning, as well as extensive world knowledge. While LLMs in the order of hundreds of billions of parameters posses such capabilities, fine-tuning them to ToM tasks is expensive. To address this, the paper presents a weak-to-strong guidance framework where small LMs (in the order of 4-8B parameters) are fine-tuned on ToM tasks and guide a large LM to perform such tasks without further fine-tuning. This is achieved by modifying the LLM's predictions by the difference in predictions between the base small model and the same model after it was fine-tuned on ToM tasks. The authors conduct several experiments that show the efficacy of their approach on a plethora of models. Claims And Evidence: The paper is very well written and the claims are supported by evidence. Methods And Evaluation Criteria: The evaluation methods make sense. Theoretical Claims: The proof of Theorem 1 seems correct to me. Experimental Designs Or Analyses: The experimental design is sound. Supplementary Material: I read the proof of Theorem 1 and it seems correct. Relation To Broader Scientific Literature: The paper addresses an important issue in the area of ToM modeling and provides a sound solution. Essential References Not Discussed: Prior work is properly discussed, Other Strengths And Weaknesses: Nothin to add. Other Comments Or Suggestions: Nothing to add. Questions For Authors: Nothing to add. Code Of Conduct: Affirmed. Overall Recommendation: 4
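The logit-difference adjustment summarised above (the large LM's predictions shifted by the change fine-tuning induced in the small LM) can be sketched as follows. The vocabulary, logit values, and guidance weight `alpha` are illustrative assumptions, and the sketch omits the controlled scaling of the paper's Eq. (7):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical next-token logits over a 4-token vocabulary.
logits_large      = np.array([1.0, 1.0, 1.0, 1.0])  # pretrained large LM
logits_small_base = np.array([0.5, 0.2, 0.1, 0.3])  # small LM before ToM tuning
logits_small_ft   = np.array([0.4, 0.2, 2.0, 0.3])  # small LM after ToM tuning

# Weak-to-strong control: shift the large LM's logits by the difference
# the ToM fine-tuning induced in the small LM (alpha = guidance weight).
alpha = 1.0
logits_guided = logits_large + alpha * (logits_small_ft - logits_small_base)

p = softmax(logits_guided)
# Probability mass moves toward token 2, which fine-tuning upweighted,
# without touching the large LM's weights.
assert int(np.argmax(p)) == 2
```

This makes concrete why the large model's pretrained knowledge is preserved: only its output distribution is steered at inference time.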
Rebuttal 1: Rebuttal: We sincerely thank Reviewer RMFT for the positive and encouraging assessment, and for clearly recognizing the unique complexity in multimodal ToM we aim to address. The core philosophy of our weak-to-strong guidance BIP framework is precisely to leverage specialized small language models to efficiently guide large pretrained models, thus preserving extensive implicit world knowledge essential for multimodal ToM. Your confirmation of our theoretical rigor, experimental soundness, and alignment with the broader literature greatly encourages us!
Summary: This paper proposes a scalable Bayesian Planner that employs small models for stepwise Bayesian updates, refining the likelihood estimation of larger models. Experimental results demonstrate that this approach outperforms existing methods on multimodal Theory of Mind (ToM) benchmarks and generalizes well to unseen scenarios. Claims And Evidence: N/A Methods And Evaluation Criteria: N/A Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: The paper introduces a novel hierarchical modeling approach where specialized smaller models assist larger models, effectively balancing robustness and generalization while reducing the complexity of multistep training. The control mechanism in Equation (7) is particularly well-designed, allowing the language model to focus on differences between the base model and the post-trained model. The experimental design is strong, including evaluations of models' performance over extended planning steps, tests on both text-only and multimodal benchmarks, and assessments in unseen scenarios. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: I have some clarification questions: - How does post-training affect generalization in unseen scenarios? In Table 4, the 4B-depth and 8B+70B models outperform the 70B post-trained model. Does this suggest that post-training limits the generalization of the 70B model, or that the smaller post-trained model enhances the generalization of larger models? - What is the notation g1/g2 in Equation (2)? - What are the results on text-only benchmarks? Given the framework’s design, it should theoretically generalize well to text-based ToM tasks. Providing these results would strengthen the claim of generalizability. - What is your model’s performance on the scenarios depicted in Figure 1? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We sincerely thank Reviewer DnoB for the insightful comments and valuable suggestions.** --- **Q1:** *Does direct post-training limit the generalization of the large LM, or does guidance from smaller post-trained LMs enhance generalization?* **A1:** Thank you for raising this insightful question. Our experiments (Tables 2 & 3) suggest that directly post-training the large LM (70B) achieves suboptimal generalization. Specifically, the directly post-trained 70B LM performs worse than the proposed method (8B+70B, 4B-depth+70B, and 4B-width+70B). This indicates that direct post-training on large LMs may inadvertently diminish their inherent generalization capabilities, likely due to partial overwriting of implicit world knowledge and mental-state reasoning abilities acquired during extensive pretraining. This interpretation aligns closely with recent literature on generalization degradation and catastrophic forgetting in extensively fine-tuned large LMs [1,2,3]. In contrast, our proposed weak-to-strong control addresses this by post-training only small LMs (4B or 8B), which subsequently guide the larger LM exclusively at inference time without modifying its pretrained weights. Thus, these smaller models function as specialized lightweight controllers that effectively enhance ToM reasoning without compromising the broader pretrained capabilities. This strategy enables our framework to effectively balance task-specific adaptation with robust generalization, yielding better performance on previously unseen scenarios. We will include this discussion in our revised paper. **References:** - [1] “Overtrained Language Models Are Harder to Fine-Tune,” arXiv:2503.19206. - [2] “Understanding Catastrophic Forgetting in Language Models via Implicit Inference,” ICLR 2024. - [3] “Spurious Forgetting in Continual Learning of Language Models,” ICLR 2025. 
--- **Q2:** *$g_1/g_2$ in Eq (2)* **A2:** In Equation (2), $g_1$ and $g_2$ represent two candidate **goal hypotheses** within our Bayesian inverse planning framework. Each hypothesis $H_i = \langle g_i, b_i^t \rangle$ comprises a goal $g_i$ and a corresponding belief $b_i^t$, jointly providing potential explanations for the observed agent behavior. In our experimental setup (Sections 3 and 4), these hypotheses directly correspond to the provided multiple-choice answer options (e.g., options (a) and (b)). Equation (2) thus computes and compares the posterior likelihoods of these candidate hypotheses given observed states and actions. We will explicitly clarify this linkage near Equation (2) in the revised manuscript. --- **Q3:** *Results on text-only benchmarks to strengthen the claim of generalizability?* **A3:** **Table C: https://anonymous.4open.science/r/response_tom-BD87/tableC.md** Following your valuable suggestion, in Table C, we conducted additional experiments specifically evaluating our method on text-only Theory-of-Mind tasks (the same MMToM-QA benchmark but without visual or multimodal inputs). We are pleased to provide these new results and will include them in our revised paper. In these text-only evaluations, despite lacking the fine-grained temporal state transitions provided by multimodal observations, our model maintains robust performance. As our smaller LM was post-trained on activity data derived from the multimodal ToM simulator, it effectively leverages familiarity with underlying scenarios to accurately infer high-level ToM states from text alone, thereby strongly supporting our claim of generalization. --- **Q4:** *Performance on scenarios in Figure 1* **A4:** Following your recommendation, we conducted additional evaluations of our weak-to-strong control framework specifically on scenarios depicted in the updated Figure 1 ([link](https://anonymous.4open.science/r/response_tom-BD87/accuracy_vs_steps_update.png)). 
Our model (8B+405B) consistently outperforms the 405B with CoT on tasks ranging from 1-step to 7-step planning. For tasks exceeding 8 planning steps, performance between methods converges. This convergence occurs because these highly complex multi-step tasks fall beyond the distribution of the post-training dataset. As a result, successful reasoning at higher complexities increasingly depends upon the intrinsic pretrained grounding capabilities of the large LM. We will include this discussion and the updated Figure 1 in our revised paper.
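The stepwise posterior comparison over goal hypotheses described in A2 can be sketched on a toy example. This is our illustration only, not the authors' implementation: the hypothesis names, likelihood table, and observations below are hypothetical, and in the actual method the per-step likelihoods come from the post-trained small LM rather than a lookup.

```python
# Toy sketch of comparing candidate goal hypotheses (e.g., options (a) and (b))
# via stepwise Bayesian updates from observed (state, action) pairs.
# All numbers below are illustrative, not from the paper.

def posterior_over_hypotheses(priors, likelihoods, observations):
    """priors: {hypothesis: P(H)}; likelihoods: {hypothesis: fn(s, a) -> P(a | s, H)};
    observations: list of (state, action) pairs. Returns the normalized posterior."""
    scores = dict(priors)
    for s, a in observations:          # one Bayesian update per observed step
        for h in scores:
            scores[h] *= likelihoods[h](s, a)
    z = sum(scores.values())
    return {h: p / z for h, p in scores.items()}

# Two hypothetical goal hypotheses with hand-picked action likelihoods:
likelihoods = {
    "g1": lambda s, a: 0.8 if a == "walk_to_fridge" else 0.2,
    "g2": lambda s, a: 0.3 if a == "walk_to_fridge" else 0.7,
}
obs = [("kitchen", "walk_to_fridge"), ("kitchen", "walk_to_fridge")]
post = posterior_over_hypotheses({"g1": 0.5, "g2": 0.5}, likelihoods, obs)
assert post["g1"] > post["g2"]   # repeated fridge-directed actions favor g1
```

Because the update factorizes over steps, the comparison scales with the number of observed actions rather than requiring joint inference over the whole trajectory, which is the property the rebuttal attributes to the stepwise formulation.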
Summary: The paper presents a Bayesian ToM method using stepwise belief updates and weak-to-strong LM transfer, unifying social and world knowledge to achieve 4.6% higher accuracy on multimodal tasks (including unseen settings) than prior approaches, resolving scalability/generalization trade-offs. ## update after rebuttal The authors have addressed most of my concerns. Although minor issues remain, the paper is satisfactory overall, and I lean toward an accept recommendation. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I verified the proof on decomposing ToM reasoning into stepwise Bayesian updates and the one on transferring ToM reasoning from smaller to larger language models. Overall, the proofs are sound, though some high-dimensional scalability assumptions need further clarification. Experimental Designs Or Analyses: We reviewed the experimental design on multimodal benchmarks and the analysis reporting a 4.6% improvement. Overall, the setup is sound, but more details on data partitioning and confounding factors would be beneficial. Supplementary Material: I reviewed the entire supplementary material. Relation To Broader Scientific Literature: This work bridges Bayesian cognitive modeling (via modular belief updates) and large language models (LLMs) to overcome scalability and generalization issues in traditional Theory-of-Mind methods. Its novel weak-to-strong knowledge transfer integrates social reasoning with real-world knowledge in LLMs, achieving state-of-the-art accuracy in complex, unseen multimodal scenarios. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: + The Bayesian ToM planner decomposes complex theory-of-mind reasoning into stepwise Bayesian updates. This design enables effective scaling across different model sizes (from 7B to 405B parameters), overcoming the typical scalability issues found in previous methods. 
+ The approach leverages a “weak-to-strong” control strategy by using smaller language models to refine ToM-specific likelihood estimates. These estimates are then integrated into larger models, which combines specialized reasoning with broader social and world knowledge. + The method demonstrates a 4.6% accuracy gain over state-of-the-art techniques on multimodal ToM benchmarks, including in unseen scenarios, suggesting tangible benefits in real-world applications. Weaknesses: - Although the method is scalable, combining multiple models (smaller ones for likelihood estimation and larger ones for integration) may lead to increased computational complexity. This might limit its practicality for applications with strict real-time or resource constraints. - Although experimental results are promising, it remains uncertain how well the approach will perform across the diverse and nuanced landscape of real-world social interactions. - It is unclear why chain-of-thought models like Deepseek or GPT O3 were not used. Other Comments Or Suggestions: N/A Questions For Authors: 1. Could you elaborate on why chain-of-thought models like Deepseek or GPT O3 were not experimented with in your approach, given their recent prominence? 2. What are the practical computational limitations of your Bayesian ToM planner in real-world, resource-constrained scenarios, and how might these affect its scalability? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We sincerely thank Reviewer dLAb for their insightful comments and support.** --- **Q1:** Practicality under strict real-time/resource constraints? **A1:** Our method uses a small post-trained LM (4B/8B) to dynamically guide the large pretrained LM (70B/405B) at inference. Practically, both models comfortably fit on individual NVIDIA H100 GPUs (80GB, BF16 precision) and run in parallel with minimal synchronization (small likelihood tensors). For example, for the 8B+70B model, its inference for 600 tasks takes ~14–15.5 min (1.4–1.55s per question), nearly identical to a single unguided 70B model, as the extra computational overhead of the 8B model is almost negligible compared to the 70B model. Furthermore, the large LM only performs likelihood estimation (**prefilling** ~1024 tokens), typically completed in ≤1 second per GPU [NVIDIA refs 1, 2]; correspondingly, the small LM requires only ~0.5 seconds per question on its own. Such prefilling tasks are highly amenable to acceleration tools (e.g., NVIDIA Dynamo, vLLM), making our Bayesian ToM planner practically suitable even under resource constraints. Additionally, our method avoids the costly fine-tuning of large LMs. For example, directly fine-tuning a 405B model is practically infeasible for most institutions due to extreme GPU requirements (~50–64 NVIDIA H100 GPUs). In contrast, our method can fine-tune an 8B model to guide a 405B model, requiring only a single H100 GPU and achieving superior performance. We will include this discussion in our revised paper. --- **Q2:** Generalization to diverse, nuanced real-world social interactions? **A2:** **Table A: https://anonymous.4open.science/r/response_tom-BD87/tableAB.md** We evaluated generalization to diverse, complex unseen scenarios (Tables 4, 9 & 10), covering Andersen fairy tales, Ancient Egypt, Outer Space, Wild West, and Medieval Castle. Results consistently show stable, robust generalization.
Additionally, inspired by your feedback, we expanded our evaluation using MuMA-ToM [3], a benchmark explicitly designed for nuanced social interaction, including: - **Belief inference:** Understanding environmental dynamics. - **Social Goal inference:** Interpreting subtle social objectives. - **Belief-of-Goal inference:** Attributing complex mental states. Results in Table A show that our method performs competitively with the state-of-the-art GPT-4o-based LIMP [3] and outperforms all the other baselines. Note that this is achieved by using open-source models, avoiding the expensive GPT-4o API cost required by LIMP [3]. Our weak-to-strong control robustly leverages large pretrained LMs, effectively adapting to real-world social reasoning without compromising generalization. We will include these new results in our revised paper. --- **Q3:** Why were prominent CoT models (Deepseek, GPT O3) not included? **A3:** **Table B: https://anonymous.4open.science/r/response_tom-BD87/tableAB.md** Thank you for highlighting CoT models. We initially did not include Deepseek R1 (671B) or GPT O3-mini (released in Jan 2025), as their releases were close to the submission deadline (also in Jan 2025). Additionally, our multimodal ToM tasks prioritize implicit world knowledge and nuanced mental-state reasoning, which differ fundamentally from CoT models' strength in explicit logical reasoning. Following your valuable suggestion, we conducted new evaluations in Table B, which clearly show the performance ranking: **Our method > Deepseek R1 > GPT O3-mini**. This demonstrates that the core challenge in multimodal ToM tasks is the depth and breadth of implicit mental-state and world knowledge—areas where large pretrained representations excel over logic-specialized CoT models. Further, our cognitive-inspired BIP framework effectively mitigates the overthinking/hallucination pitfalls observed in specialized logical reasoning models. We will include these new results in our revised paper.
--- **Q4:** (minor) The setup is sound, but more details on data partitioning and confounding factors would be beneficial. **A4:** We provide detailed dataset construction, partitioning strategies, and confound mitigation in Sec. 4.1 and App. D. Specifically, following MMToM-QA's setting, the training set is derived from 1,000 procedurally-generated videos annotated with structured sequences (states, goals, beliefs, actions). The test set uses 600 questions derived from another 134 videos, which use entirely disjoint environments and narratives. To enhance clarity, we will add more details to App. D, as well as explicitly reference App. D and the accompanying repository readme file in the revised main paper. --- **References:** - [1] NVIDIA MLPerf AI Benchmarks. “Llama 2 70B: MLPerf Benchmark.” - [2] NVIDIA Technical Blog. “Boost Llama 3.3 70B Throughput 3x with TensorRT.” - [3] MuMA-ToM: Multi-modal Multi-Agent Theory of Mind, AAAI-25 (Oral).
Score-Based Diffusion Policy Compatible with Reinforcement Learning via Optimal Transport
Accept (poster)
Summary: The method proposed in this paper is quite complex, but I will do my best to summarize it: * The authors attempt to integrate diffusion policies (which are typically learned via imitation on expert data) with online environment interactions. * To this end, the authors make the following key observation (Prop 4.1): minimizing the optimal transport problem from expert state distribution -> expert action distribution, with the cost being the negative of the expert Q function, yields a policy of equivalent performance to the expert. * So the algorithm roughly repeats the following steps: a) Learn the optimal transport problem solution (as dual functions, although we can recover the primal), given the current learned Q-function and data from the replay buffer. b) For each inference step, sample a bunch of state-conditioned actions from the diffusion policy. Weight the probabilities of the state-action pairs by the coupling function H(s, a) from the OT problem solution; intuitively, we are doing weighted sampling based on how "likely" it is that an optimal policy following the current Q-function would take each action. Select an action according to the reweighted probabilities and execute in the environment. c) After doing a rollout, update the Q-function based on the observed rewards. d) Update the diffusion policy using equation (10); this is similar to advantage-weighted regression, where we are upweighting the state-action pairs where H(s, a) is high. * Basically, if my understanding is correct, we solve the OT problem to transform a Q function into this coupling object H(s, a), which tells us how "good" a state-action pair is. We then use H to both weight (s,a) pairs in training the diffusion policy and guide sampling at inference time. * None of the above strictly requires expert demonstrations. But if we do have expert demonstrations, these enter the OT solving step via a masking function which gives keypoints to guide the OT solution.
* The authors show convincing performance of this method across a range of problems. Claims And Evidence: Claim: the OTPR policy outperforms competing baselines. - Evidence: strong. Across all experiments in Figure 2, OTPR is either clearly best or tied. It is also clearly the most consistent--no other baseline performs close to OTPR for all scenarios. The performance improvement compared to other demo-augmented RL algorithms in Table 1 is quite substantial. Claim: the guidance from coupling function H (for training and sampling at inference) is better than that from the Q function or the advantage function. - Evidence: moderate to strong. The authors compare these for a single robomimic-can task (Figure 3 left), where the difference is substantial, but I'd like to see this for other environments as well. Claim: OTPR without the expert-demonstration masking "exhibits instability and reduced efficiency" (420). - Evidence: weak to moderate. The distinction between the masked and unmasked versions of OTPR in Figure 3 right is marginal, and again only for a single environment. I don't see what suggests that the unmasked version "exhibits instability". Methods And Evaluation Criteria: Yes, and they are thorough. Theoretical Claims: I did have a look at the proof of Proposition B.1 and it seems reasonable, although I have some confusions about the problem setup which means I wasn't able to confidently check it (see below). Experimental Designs Or Analyses: The experiments are reasonable. I like that the hyperparameters for OTPR are consistent across all the tasks in Table 2 (Appendix C.1), meaning that the method doesn't require extensive hyperparameter tuning. Supplementary Material: I did not check the code. Relation To Broader Scientific Literature: This paper is addressing a really important topic in robotics right now: how can we improve diffusion policies with online data? This is a challenging problem for diffusion policies in particular.
Typically, in online RL you need to take the gradient of the action likelihood with respect to the policy parameters; but for diffusion policies, there is no closed-form action likelihood as actions are produced via an iterative sampling scheme. A natural idea is to weight the regression targets in the diffusion loss by the Q function or the advantage. I'm mentally understanding the submission as proposing a better version of this using an OT coupling function H(s, a) instead. The experiments are pretty compelling, even though I don't fully understand the derivation. Essential References Not Discussed: N/A Other Strengths And Weaknesses: My main reservation is that the paper is pretty hard to understand. There's a lot going on, the math exposition is pretty confusing, and I think there are quite a few typos. Here are some of my confusions: 1. In line 166 RHS, the authors say that the policy "moves mass from the state distribution $\mu(s)$ to the distribution of actions $\nu(a)$." What are these state and action distributions? Are they the stationary distributions under the expert policy? Effectively, the marginal distributions of the (s,a) pairs from the expert dataset? 2. In section 4.2, the authors denote the "condition data" of an action as $s_{cond}(a)$. This makes very little sense to me, as the same action can be taken in multiple different states, so there isn't one unique state that produces a particular action. 3. I think in the RHS of (13) the i index should appear in the numerator, instead of k? 4. The authors say that the input to Algorithm 1 is an "initailized Q-network"-- what does this mean? Is it pretrained? 5. In (22), should it read $\pi(s) \in \text{Supp}(\nu)$? 6. In the proof of Prop B.1, the relevant result from Kakade & Langford should be reproduced to make everything self-contained. 7. When introducing the Regularized OT Dual, it should be explicitly stated what kind of objects u and v are.
I think it would also be nice if the derivation of the dual could be provided in the appendix because it's not obvious to me how that happened. Other Comments Or Suggestions: N/A Questions For Authors: 1. What you show is that H(s, a) is effectively just a "better" version of the Q function (Figure 3 left) for guiding and training diffusion policies. It seems that H(s, a) comes from three main ingredients: the Q function, the marginal distribution over states, and the marginal distribution over actions. What, intuitively, do these last two ingredients "add in" to make H better than Q? It seems in algorithm 2 that you are just learning H from the data in the replay buffer, which is collected from a suboptimal policy (the one currently being trained). Why would that help? 2. In figure 4, why isn't optimal transport assigning a unique action for every state? Am I right that this is the relaxed scheme (eq (2)) instead of the Monge formulation (eq (1))? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer an3h, Thank you for your detailed review and constructive feedback on our submission. Below, we address your main concerns and questions. ## C1: Claims And Evidence: **R1:** We appreciate the feedback regarding evidence granularity. To substantiate our claims, we have conducted additional experiments on the Robomimic-Square environment (see Figure 4 in https://anonymous.4open.science/r/OTPR_Supplementary-5F55/README.md). The observed "instability" manifests as OTPR-U producing worse fine-tuning outcomes more frequently across multiple trials, which is visually reflected in the increased variance of success rates. We agree the original terminology could be misinterpreted and will revise "instability" to "statistically significant variance increase". ## C2: Weaknesses **R2:** We acknowledge that the confusing math exposition is a weakness. We'll work to improve the clarity of the mathematical derivations and the overall exposition. Below, we address your main confusions: **1.** Here, we assume the existence of a stationary optimal behavior policy $\pi^\beta$ in a standard MDP. In this context, the "state distribution" is formally defined as the idealized stationary distribution induced by the expert policy. This theoretical abstraction serves to formulate the imitation learning problem as a Monge optimal transport (OT) problem between distributions. We will clarify it. However, we explicitly recognize that: The true stationary state/action distribution is intractable to compute directly in practice, especially for complex environments. Real-world expert datasets only provide finite samples from this distribution. So we introduce a neural network-based approximation in Section 4.3 to estimate the underlying OT plan on the expert dataset. **2.** The reviewer is correct; we did not assume a one-to-one mapping between state and action here. If the term "condition data" has caused confusion, we are willing to revise it to "all conditional states."
**3.** Thank you for the reminder. Indeed, there was an error on our part, and we will correct it. **4.** The "initialized Q-network" refers to a newly instantiated Q-network within the deep RL pipeline that has not undergone any training iterations. To enhance algorithmic transparency, we propose adding an explicit initialization declaration step in the pseudocode. **5.** The reviewer is correct. Thank you for the reminder and we will correct it. **6.** We directly employed **Lemma 6.1** from (Kakade & Langford, 2002). As suggested by the reviewers, we will explicitly clarify the equivalence between the advantage function $A$ and $R$ to enhance the readability of the derivation. **7.** Thanks to the reviewer for the suggestion. In order to facilitate a clear understanding of the Regularized OT Dual for the reader, we will additionally provide an introduction to Kantorovich duality and provide a proof in the appendix; the core idea is to rewrite the constrained infimum problem as an **inf sup problem**, and exchange the two operations by formally applying a minimax principle, i.e., replacing an “inf sup” by a “sup inf”. ## C3: Question 1 For Authors *H(s, a) as a Better Version of the Q Function?* **R3:** First, we would like to clarify the reviewer's understanding. As described in Equation 15, **the compatibility function $H$ incorporates two estimated dual variables $u$ and $v$, rather than the distributions $\mu$ and $\nu$**. We sincerely apologize for the unintended ambiguity caused by the visual similarity between English and Greek letters in our notation. In the revised manuscript, we will adopt more distinctive notation to better differentiate these variables. Secondly, imitation learning aims to acquire a deterministic state-action mapping from expert demonstrations, while reinforcement learning focuses on learning a Q-function to evaluate action sampling, even for suboptimal actions.
Our approach bridges these two objectives by introducing a compatibility function $H$ that establishes a **soft** state-action coupling relationship from a data distribution perspective. Specifically, the proposed compatibility function offers two advantages: (1) For clearly advantageous state-action pairs (e.g., those from expert demonstrations), $H$ provides precise guidance; (2) For novel state-action pairs that may emerge during RL exploration (particularly those absent from the training data), where Q-value estimates could be unreliable, the potential functions derived from the optimal transport plan serve as a corrective mechanism to adjust these estimates. ## C4: Question 2 For Authors *In figure 4, why isn't optimal transport assigning a unique action for every state? Am I right that this is the relaxed scheme (eq (2)) instead of the Monge formulation (eq (1))?* **R4:** The reviewer is correct. As indicated in the title of Figure 4, we indeed visualize the Optimal Transport Plan in this figure. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their thorough response. A few comments on my remaining concerns below: *The reviewer is correct; we did not assume a one-to-one mapping between state and action here. If the term "condition data" has caused confusion, we are willing to revise it to "all conditional states."* The formulation of Proposition 4.2 doesn't make sense if $s_{cond}$ is now a set-valued map. What does $\mathcal{C}(s, a)$ become? Can the authors describe in detail where $s_{cond}$ comes from in practice? *For novel state-action pairs that may emerge during RL exploration (particularly those absent from the training data), where Q-value estimates could be unreliable, the potential functions derived from the optimal transport plan serve as a corrective mechanism to adjust these estimates.* Can you elaborate on this? Why are the potential functions not also unreliable, since you're using the Q function as the optimal transport loss? 
How do the potential functions act "as a corrective mechanism"? --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you very much for your insightful comments and constructive feedback. We regret that our previous rebuttal may not have fully addressed your concerns due to word limitations. Please find our detailed response to each of your points: ## C1. What does $\mathcal{C}(s,a)$ become? Can the authors describe in detail where $s_{cond}(a)$ comes from in practice? **R1:** 1. The reviewer’s observation is entirely valid. When $s_{cond}(a)$ is a set-valued map (i.e., an action $a$ may correspond to multiple states $s$), the original definition of the Dirac delta function $\delta(s-s_{cond}(a))$ introduces a mathematical inconsistency, as $s_{cond}(a)$ becomes a set rather than a single state. To resolve this, the **Generalized Dirac Measure** $\delta_{s_{cond}(a)}(s)$ can be defined as an integral measure over sets, i.e., $\delta_{s_{cond}(a)}(s) = 0$ if $s \notin s_{cond}(a)$. 2. Our goal is to achieve smooth RL fine-tuning for diffusion policy (DP) from imitation learning (IL). So we introduce Proposition 4.2 to reformulate the IL objective $J_{DSM}$ of DP into $J_{CDSM}$, which establishes a mapping from states to actions through data-driven learning. Thus, the context here is the setting of imitation learning, and $s_{cond}(a)$ originates from the paired expert data of $(s, a)$. The actual code implements this by constructing a hash map. We greatly appreciate the reviewer's comments. We will make the necessary corrections and add relevant explanations in the manuscript. This will make our theory more rigorous and clear. ## C2. For novel state-action pairs ... Can you elaborate on this? Why are the potential functions not also unreliable, since you're using the Q function as the optimal transport loss? How do the potential functions act "as a corrective mechanism"? **R2:** Thank you for the insightful question.
Our method shares conceptual similarities with offline RL approaches like **Weighted Regression** (e.g. AWR and RWR) and **Selection from Behavior Candidates** (e.g. SfBC and IDQL). Both our method and prior works (e.g., IDQL and SfBC) involve: **(1) Using Q-learning to assign scores to state-action pairs from the behavior policy.** **(2) Training a diffusion policy via forward KL minimization.** Obviously, a naive approach would **directly resample actions using Q-values** as weights (as in SfBC), but this risks over-reliance on Q-values for OOD pairs, where they may be unreliable or falsely high. ***Corrective Role of Potential Functions:*** Instead of relying solely on Q-values, we use the optimal transport (OT) plan to derive weights. Specifically: The OT plan’s dual potentials (estimated from the dataset and replay buffer) decouple the dependency on Q(s,a) for novel pairs by separately reweighting states (s) and actions (a) based on their marginal distributions. For in-distribution (s,a): Q-values are relatively accurate, so the OT-derived potentials (and the resulting composite cost H(s,a)) align with Q-learning. For OOD (s,a): The potentials act as a conservative regularizer by leveraging the **global** structure of the dataset (via state/action marginals) rather than trusting local Q-extrapolations. ***Why Potentials Are More Reliable Than Q-Learning Alone:*** The potentials are not trained to maximize returns (unlike Q-functions) but to approximate the data distribution’s geometry (via OT’s marginal constraints). While the OT loss uses Q-values as a cost, the potentials are smoothed over the dataset—avoiding overfitting to spurious Q-peaks. This is analogous to how OT-based imputation handles noisy inputs by enforcing mass conservation. The MASK mechanism further ensures that expert data retain their original pairs. In essence, the OT plan provides a conservative reweighting that balances Q’s local accuracy with global distributional fidelity.
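The role of the dual potentials described above can be illustrated with a minimal discrete entropic-OT sketch. This is our reading, not the paper's implementation (which estimates the dual variables with neural networks on continuous data): the cost is the negative of a toy Q-table, Sinkhorn iterations estimate the dual scalings, and the resulting plan plays the role of the compatibility weight H(s, a) whose marginals are pinned to the state/action distributions regardless of spurious Q-peaks.

```python
import numpy as np

def sinkhorn_compatibility(Q, mu, nu, eps=0.5, n_iter=500):
    """Entropic OT plan for cost c(s, a) = -Q(s, a) between discrete
    state marginal mu and action marginal nu (toy H(s, a) analogue)."""
    K = np.exp(Q / eps)                 # Gibbs kernel, exp(-c / eps)
    u = np.ones_like(mu)
    for _ in range(n_iter):             # alternating marginal projections
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    v = nu / (K.T @ u)
    return u[:, None] * K * v[None, :]  # plan with (approx.) marginals mu, nu

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 3))             # toy Q-values, 4 states x 3 actions
mu = np.full(4, 0.25)                   # uniform state marginal
nu = np.full(3, 1.0 / 3.0)              # uniform action marginal
H = sinkhorn_compatibility(Q, mu, nu)
# Marginal constraints hold even where Q is large: mass is conserved,
# so no single spurious Q-peak can absorb unbounded weight.
assert np.allclose(H.sum(axis=1), mu, atol=1e-6)
assert np.allclose(H.sum(axis=0), nu, atol=1e-6)
```

The asserts make the "corrective" point concrete: unlike raw Q-based resampling, the OT-derived weights are constrained to match the data's state and action marginals, which caps the influence of overestimated Q-values on out-of-distribution pairs.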
Summary: This paper introduces OTPR, a novel method that integrates optimal transport theory with diffusion policies to enhance the robustness and performance of imitation learning models through online interactions with the environment. The core algorithmic idea involves leveraging the Q-function as a transport cost and viewing the policy as an optimal transport map to establish a connection between optimal transport and RL. OTPR also introduces masked optimal transport to guide state-action matching using expert data as keypoints and a compatibility-based resampling strategy to improve training stability. The paper's main findings from experiments on three simulation tasks demonstrate that OTPR consistently matches or outperforms existing state-of-the-art methods, especially in complex and sparse-reward scenarios, highlighting its effectiveness in combining imitation learning and reinforcement learning for versatile and reliable policy learning. ## update after rebuttal I confirm my score. Authors addressed comments and added clarity and results to the original submission. Claims And Evidence: The paper’s claims that OTPR (1) integrates optimal transport with diffusion policies to stabilize fine-tuning, (2) achieves notable performance gains over baseline methods, and (3) remains robust in sparse-reward environments are generally well-supported. Multiple experiments on robotic tasks with varying difficulty, along with comparisons to several recent diffusion-based and demo-augmented RL baselines, underscore OTPR’s improvements. The authors also include ablations (e.g., masked vs. unmasked OT) to illustrate how each component contributes to the final performance. Methods And Evaluation Criteria: The paper takes Robomimic, Franka-kitchen and CALVIN as the evaluation benchmark, which are suitable for assessing multi-step tasks under distribution shifts, making the evaluation well-matched to the paper’s goals.
Theoretical Claims: There are no apparent flaws in the theoretical arguments as presented. Experimental Designs Or Analyses: The paper’s ablation studies that remove “masked OT” or the compatibility-based resampling strategy appear valid for isolating each component’s effect on performance. The comparisons to both diffusion-based and non-diffusion-based RL methods also lend credibility to the authors’ main performance claims. However, finer details like runtime comparisons are treated less thoroughly, so it’s hard to assess how sensitive or resource-intensive the method might be in broader settings. Overall, the key experiments are logically consistent, and no major flaws are evident in their design or analysis. Supplementary Material: The paper has no supplementary material. Relation To Broader Scientific Literature: OTPR unifies insights from generative modeling, reinforcement learning, and OT to tackle distribution mismatch more effectively. Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths:** - OTPR is the first method that combines diffusion-based policies, RL and OT. - Proposes masked optimal transport in RL fine-tuning. **Weakness:** - Still needs to be tested in real-world environments, as in DPPO, because performance achieved in a simulated environment may not directly translate to equivalent real-world performance. Other Comments Or Suggestions: Typos in Section 3.2: “Reinfrocement Learning” should be “Reinforcement Learning”. Questions For Authors: How long does fine-tuning the diffusion policy with OTPR take? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer ZmNb, Thank you for your positive review and valuable feedback on our submission. Your comments have provided us with clear directions for improvement, and we are committed to addressing them in the revised version of our paper. Below, we address your main concerns and questions. ## C1: Questions For Runtime **R1:** We thank the reviewers for their suggestion. Since all experiments were conducted under the same computational resource configuration, we directly provide the wall-clock time statistics for comparison in the Table of Supplementary Section 3 (https://anonymous.4open.science/r/OTPR_Supplementary-5F55/README.md). Our algorithm incurs approximately **8%–11%** more runtime compared to other baselines. This additional overhead stems from learning the dual term, but it is negligible considering the performance improvements achieved. ## C2: Weakness (Real World Experiments) **R2:** We sincerely appreciate the reviewer’s valuable feedback regarding the importance of real-world validation. We fully acknowledge that performance in simulated environments may not directly translate to real-world scenarios. However, due to the time constraints of the rebuttal period and the current resource limitations, we are unable to conduct real-world experiments at this stage. As demonstrated in DPPO deployments, policies trained in high-fidelity simulations can achieve zero-shot transfer to physical hardware without real-data fine-tuning. So we have justifiable confidence that our approach can achieve comparable or better zero-shot transfer performance under identical experimental settings. We are currently actively developing a simulation-to-real evaluation platform and commit to publishing empirical validation results by the Camera-Ready deadline. ## C3: Typos and Terminology **R3:** Thank you for catching the typo in Section 3.2. We will correct "Reinfrocement Learning" to "Reinforcement Learning" in the final version of the paper. 
We appreciate your attention to detail. Thank you again for your time and insightful comments. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications regarding runtime and future plans for real-world experiments. That said, I remain concerned about the limited experimental scope. While I understand real-robot evaluations may be constrained, the current benchmarks (kitchen, robomimic, CALVIN) are relatively standard and do not fully test the claimed benefits of OTPR in sparse-reward, high-dimensional, or long-horizon settings. There exist more challenging and diverse simulation tasks, such as Robomimic’s transport-pixel/state and furniture benchmarks compared with DPPO, that could have offered a more convincing demonstration. Without such experiments, it is difficult to fully assess the robustness and generality of the proposed method. I am currently keeping my score at 4, but I would strongly encourage the authors to include more diverse and challenging tasks in the final version. Otherwise, I may reconsider my recommendation. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for your constructive feedback and for understanding the constraints on real-robot evaluations. We appreciate your suggestions to strengthen our work. In response to your concerns, we have conducted additional experiments to further demonstrate the robustness of OTPR: 1. We have already evaluated OTPR on the **pixel-based Robomimic task** during the rebuttal period (Figure 2 (https://anonymous.4open.science/r/OTPR_Supplementary-5F55/README.md). The experimental results clearly demonstrate that our method either significantly outperforms or achieves comparable performance to the next best baseline approach. 2. We further evaluated OTPR’s fine-tuning capabilities on **LIBERO-Long**, which underscores its effectiveness in long-horizon tasks. 
The success rate is shown in the Table of Supplementary Section 5 (https://anonymous.4open.science/r/OTPR_Supplementary-5F55/README.md). Due to time constraints, detailed results (including comparisons of different finetuning methods and more tasks) will be progressively updated on the anonymized GitHub repository. Full analyses and real-robot experiments will also be included in the camera-ready paper. We sincerely appreciate your guidance and hope these additions address your concerns. Thank you again for your support.
Summary: The paper proposes OTPR that leverages optimal transport for fine-tuning diffusion policy in RL. The Q function is treated as the transport cost and the policy is considered the transport map. Masked OT with resampling is also applied to improve training stability. Experiment results show generally improved performance compared to other diffusion RL fine-tuning methods and demo-augmented RL methods. Claims And Evidence: The claim of improved performance is supported by results on the Franka-Kitchen, CALVIN, and Robomimic tasks, which are manipulation tasks and generally more challenging than dense-reward mujoco locomotion tasks. However, the paper lacks qualitative discussion on its effectiveness. I would vote for strong accept if there are qualitative demonstrations of improved performance from leveraging optimal transport, e.g., in a carefully designed toy problem. Methods And Evaluation Criteria: I appreciate the author considering more challenging RL benchmarks including the vision-based CALVIN. Theoretical Claims: I skimmed over the proofs in Appendix B and did not see any egregious error. Experimental Designs Or Analyses: I do not spot any particular issue with the experimental design. Supplementary Material: I reviewed the appendix; the proofs were skimmed over. Relation To Broader Scientific Literature: Diffusion RL fine-tuning is a very important area that requires active research as we have seen many successes in training diffusion policy with behavior cloning, and RL fine-tuning is critical to further improve the robustness. This paper bridges the gap between Ren et al. in online diffusion-based RL and many existing offline diffusion-based RL methods. Essential References Not Discussed: A missing reference is Psenka et al., Learning a diffusion model policy from rewards via q-score matching, which also considers online diffusion-based RL with Q function.
Other Strengths And Weaknesses: The paper is well-written and figures (especially ones in the experiment section) are nicely done. Other Comments Or Suggestions: I suggest putting more experimental details, such as how the diffusion policies are pre-trained, at the beginning of the experiment section. Questions For Authors: Did you try pixel-based robomimic tasks? Would also be interesting to try Transport from robomimic, which is significantly more challenging than the tasks considered. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer iuX1, Thank you for your positive review and constructive feedback on our submission. We appreciate your recognition of our work and are glad to hear that you found our approach and experimental results valuable. Below, we address your specific comments and questions. ## C1: Qualitative Experiments in Claims And Evidence **R1:** As suggested, we designed two illustrative 2D toy experiments to visually validate our method’s core components. Full visualizations are available in [Supplementary Section 1](https://anonymous.4open.science/r/OTPR_Supplementary-5F55/README.md). ### 1. Compatibility Function Validation **Objective:** Validate the accuracy of the compatibility function to verify whether Algorithm 2 can effectively learn the dual term. **Setup:** We conducted experiments in a 2D space, where Algorithm 2 was applied to a random dataset consisting of a Gaussian source distribution and a multi-modal target distribution (8Gaussian), with the Euclidean distance serving as the cost function. **Figure 1.1 left:** We visualize the source distribution as colored level sets and the target distribution as randomly sampled points. **Result 1 (Figure 1.1 middle):** We constructed 200 sample points from the source distribution and 2000 samples from the target distribution, then paired them using the compatibility function H. The compatibility function H successfully **matches source samples (yellow) to target samples (green) with low cost**, confirming its effectiveness. **Result 2 (Figure 1.1 right):** Generated samples obtained by sampling from the source distribution. We learn an optimal map as a neural network by approximating the barycentric projection [1] of the OT plan from Algorithm 2. Generated samples closely match the target distribution. ### 2. OT-Guided Diffusion Policy Verification **Objective:** Assess diffusion policy’s ability to recover target distributions under OT guidance.
**Setup:** We leverage 2 synthetic 2D datasets (8Gaussian and swissroll) used in [2] to further verify the effectiveness of OT guided diffusion policy. Each dataset contains data points paired with specific Q values (**Figure 1.2 left**). **Results (Figure 1.2 right):** The ultimate samples generated by the diffusion model, which closely match the ground-truth target distribution. [1] Seguy, Vivien, et al. "Large-Scale Optimal Transport and Mapping Estimation." ICLR 2018-International Conference on Learning Representations. 2018. [2] Lu, Cheng, et al. "Contrastive energy prediction for exact energy-guided diffusion sampling in offline reinforcement learning." International Conference on Machine Learning. PMLR, 2023. ## C2. Experimental Details **R2:** Thanks for the reviewer’s suggestion; we will supplement more relevant details of the pre-trained diffusion policy in Section 6, or in Appendix C if space is limited. *In the pretraining, the observations and actions are normalized to [0, 1] using min/max statistics from the pre-training dataset. No history observation (pixel, proprioception, or ground-truth object states) is used. The diffusion policy is trained with learning rate 1e-4 decayed to 1e-5 with a cosine schedule, weight decay 1e-6 and 50 parallelized. For Franka-Kitchen and Robomimic tasks, epochs is 8000 and batch size is 128; for CALVIN tasks, epochs is 5000 and batch size is 512.* ## C3. Missing Reference **R3:** We appreciate your suggestion to include **QSM** in our manuscript, which is closely related to our approach and offers valuable insights into the field. We will reference this paper appropriately in the Related Work section. ## C4. Additional Robomimic Task **R4:** We sincerely appreciate this constructive feedback. To address the request, we have conducted additional experiments on pixel-based tasks in Robomimic, including the Transport task highlighted by the reviewers.
We used ResNet as the visual encoder, similar to the setup in CALVIN. The learning curves and comparisons with baseline methods can be found at (https://anonymous.4open.science/r/OTPR_Supplementary-5F55/README.md). As shown in Figure 2 from the linked results, our method still dominates or attains similar performance to the next best method. The results align with our original pixel-based CALVIN experiments, further validating our framework’s capability to handle visual inputs while preserving offline-to-online finetuning strengths.
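The toy experiment in R1 — entropic OT between a Gaussian source and a multi-modal "8Gaussian" target under a Euclidean-type cost, followed by a barycentric projection [1] — can be sketched with a log-domain Sinkhorn loop in plain NumPy. This is our own minimal illustration under assumed sample counts, regularisation `eps`, and iteration budget; it is not the authors' Algorithm 2:

```python
import numpy as np

def logsumexp(x, axis):
    """Numerically stable log-sum-exp along an axis."""
    m = x.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))).squeeze(axis)

rng = np.random.default_rng(0)

# Source: a single 2D Gaussian; target: a mixture of 8 Gaussians on a circle
# (hypothetical stand-ins for the Gaussian -> "8Gaussian" setup above).
n_src, n_tgt = 200, 400
src = rng.normal(0.0, 0.5, size=(n_src, 2))
k = rng.integers(0, 8, size=n_tgt)
centers = np.stack([4 * np.cos(k * np.pi / 4), 4 * np.sin(k * np.pi / 4)], axis=1)
tgt = centers + rng.normal(0.0, 0.2, size=(n_tgt, 2))

# Squared-Euclidean transport cost between every source/target pair.
cost = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)

# Entropic OT via log-domain Sinkhorn updates of the dual potentials f, g.
eps = 1.0
a = np.full(n_src, 1.0 / n_src)  # uniform source weights
b = np.full(n_tgt, 1.0 / n_tgt)  # uniform target weights
f, g = np.zeros(n_src), np.zeros(n_tgt)
for _ in range(2000):
    f = -eps * logsumexp((g[None, :] - cost) / eps + np.log(b)[None, :], axis=1)
    g = -eps * logsumexp((f[:, None] - cost) / eps + np.log(a)[:, None], axis=0)

# Transport plan and its barycentric projection (an approximate OT map).
plan = np.exp((f[:, None] + g[None, :] - cost) / eps) * np.outer(a, b)
bary = (plan @ tgt) / plan.sum(axis=1, keepdims=True)
```

At convergence the plan's marginals recover `a` and `b`, and `bary` sends each source sample toward the target mass it is matched with — roughly the qualitative behaviour Figure 1.1 visualises.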
Summary: This paper proposes to reformulate offline-to-online diffusion policy training with optimal transport. It views the policy as a transport from the state distribution to the action distribution, using the (negative) Q-function as a transport cost and treating the policy as an optimal transport map. The authors show that the score matching objective of the diffusion policy training can be augmented with a weighting function that constitutes the joint distribution of state and action pairs. They further show that this weighting function can be relaxed with a compatibility function that involves the Q function as well as some dual variables $u_w(\mathbf{s})$ and $v_w(\mathbf{a})$ derived from the dual form of regularized optimal transport. As this relaxed weighting function gives zero weight to state-action pairs where the state is in expert demonstrations while the action is not, they coin their objective Masked Optimal Transport. The authors also provide an analysis to illustrate that the proposed training algorithm is optimizing an upper bound of the distance between the diffusion policy and the optimal transport plan. Their experiments demonstrate clear offline-to-online improvement. ### After rebuttal ### The authors addressed my major concerns. I updated my rating accordingly. Claims And Evidence: The major claim is that the OT perspectives help bridge diffusion policy with RL, which seems to be only weakly supported by showing the OT is using the (negative) Q function as a cost function. However, this connection appears to be fairly superficial since which Q learning method is used and what policy induces its Q value is not discussed. Methods And Evaluation Criteria: Viewing policy learning as an optimal transport from state distribution to action distribution makes sense conceptually. However, how to define the transportation cost is tricky. The proposed method has an implicit assumption that the key states, i.e.
states covered by expert demonstrations, can only be paired with the associated actions in demonstrations. This appears to be a restriction to generalization. The evaluation criteria, i.e. the learning curves and the final performance, are standard in the literature. Theoretical Claims: The proofs make sense to me at a high level. But I didn't check the details. Experimental Designs Or Analyses: The authors conducted experiments in online finetuning on 6 RL tasks, which seems insufficient. Fig. 2 shows the proposed method has a significant effect in most of them. They also compare with methods that use both offline data that are not necessarily optimal and expert data in offline training and online finetuning. The proposed method appears to be the only effective one in offline training, and performance is boosted after online finetuning. Supplementary Material: N/A Relation To Broader Scientific Literature: The problem of offline-to-online learning is crucial for robotics. This paper is an attempt in this direction. Essential References Not Discussed: N/A Other Strengths And Weaknesses: + The ablation of different compatibility methods clearly illustrates the effectiveness of the proposed H function. - The formulation appears to be complicated. I appreciate the authors' effort in introducing the perspectives of optimal transport to diffusion policy. But the exposition of the paper involves non-intuitive formal notions, making it less accessible. The proposed algorithm also looks complicated. At each iteration, the dual variables, which are MLPs, need to be "optimized" as the first step of learning. - The proposed masked OT does not introduce significant gains according to Fig. 3. Other Comments Or Suggestions: The notation appears disorganized. For example, $\mathbf{a}$ and $a$ are used interchangeably in Section 3.2. There is a typo in Eq 11.
Questions For Authors: The mask scheme that only allows states in the expert demonstration to be matched with the actions appears to be pretty restrictive. Does that mean the model won't discover any actions that are equivalently good to actions that appear in the demonstration? How is the offline learning in Table 1 performed? Why does the proposed method perform well in offline learning while most of the baseline methods fail? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, Thanks for your thorough review and constructive feedback on our submission. ## C1. Claims: *The connection between OT and RL appears superficial, and the role of Q-learning methods requires clarification.* **R1:** While the claim has already garnered recognition from other reviewers (e.g. *OTPR unifies insights from generative modeling, reinforcement learning...* by Reviewer ZmNb), we provide these focused responses to your concerns: **Theoretical Equivalence:** To support this claim, we have provided a proof in Appendix B.1 demonstrating the equivalence between the OT plan and the optimal policy. This theoretical result forms a solid foundation for our approach and shows that the connection is not merely superficial. **The Relevance of Q as cost:** The use of the Q-function as a transport cost builds on prior static offline RL works (e.g., [1]). Our work uniquely integrates this idea with diffusion policy to enable smooth offline-to-online finetuning. **Q-Learning Compatibility:** OTPR is an algorithm-agnostic framework incorporating any existing Q-learning algorithm. [1] Asadulaev, Arip, et al. "Rethinking Optimal Transport in Offline Reinforcement Learning." Advances in Neural Information Processing Systems (2024) ## C2. Generalization: *The mask appears to be a restriction to generalization.* **R2:** We appreciate the reviewer’s attention to generalization capabilities. We believe the reviewer has understood that our algorithm consists of two parts: (1) estimating the OT plan to provide a compatibility function H(s,a), and (2) using H to guide the diffusion policy optimization. The mask is introduced during (1) and designed to fully leverage the paired state-action data from expert demonstrations to improve the accuracy of $H$. The effectiveness of this key-point guidance approach has been validated in some domain adaptation studies. Crucially, the mask does not constrain the policy’s action space.
The mask focuses only on actions that are known to be effective in specific seen states, as demonstrated by expert data, and does not directly influence policy learning or inference. During online fine-tuning, the policy can freely explore novel actions in both seen and unseen states. ## C3. Additional Experiments: **R3:** We have added two qualitative 2D toy experiments and results on three pixel-based Robomimic tasks in https://anonymous.4open.science/r/OTPR_Supplementary-5F55/README.md. See response to reviewer iuX1 for a detailed description. ## C4. Ablation Results of Masked OT: **R4:** The masked OT mechanism is designed to refine the estimation of the OT plan by leveraging expert demonstrations as high-confidence priors. Its contributions are twofold: (1) reducing the occurrence of poor solutions, and (2) improving overall stability across multiple fine-tuning evaluations. As shown in Figure 3, these improvements are captured by the consistent trend in mean performance and the reduction in variance (tighter confidence intervals). We also supplement additional ablation experiments (Figure 4 in https://anonymous.4open.science/r/OTPR_Supplementary-5F55/README.md). ## C5. Notation Suggestion **R5:** Our intention was to use **bold font** to denote variables, while regular font indicates a specific sampled instance of that variable, following [1]. This distinction also helps differentiate between actions at each timestep and those at each denoising step. We acknowledge that this notation might have caused confusion, and we will unify the font style and fix typos in the revised manuscript. [1] Ada, et al. "Diffusion policies for out-of-distribution generalization in offline reinforcement learning." IEEE Robotics and Automation Letters ## C6.
Questions for Authors **R6:** **Masking Scheme and Action Discovery** As we mentioned in our response to the Generalization Concerns (R2), the mask mechanism is introduced to assist in estimating the OT plan. Nevertheless, it is the compatibility function H(s,a) that truly evaluates the sampled actions and guides the optimization of the diffusion policy. The computation of H is a comprehensive process that includes both the potential functions u and v, along with the Q-values, ensuring robustness. Actions with equally high Q-values are also permitted to achieve correspondingly high H-values. Thus, this does not preclude the model from discovering other good actions. **Demo-augmented RL Algorithms Performance** Similar reproduction results were also observed in the DPPO paper. RLPD is an online RL algorithm leveraging offline data. Since it does not involve a pre-training process, we set its offline performance to 0. For IBRL, we strictly adhered to the original implementation protocol and its behavioral cloning objective during the offline training stage. Its offline performance is poor, which may be attributed to the presence of noise and the multi-modality of the data.
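For readers unfamiliar with how dual potentials and Q-values can combine into a pairwise weight, here is a generic entropic-OT-style compatibility sketch with an optional expert mask. The functional form, `eps`, and the per-state normalisation are our own assumptions for illustration; this is not claimed to be OTPR's exact definition of H(s, a):

```python
import numpy as np

def compatibility(q_values, u, v, eps=0.5, mask=None):
    """Generic entropic-OT compatibility weight H(s, a).

    q_values : (n_states, n_actions) array; the transport cost is taken
               as -Q(s, a), so high-Q pairs are cheap to match.
    u, v     : dual potentials over states / actions (stand-ins for the
               MLP outputs optimized in the first step of each iteration).
    mask     : optional 0/1 matrix; a zero entry forbids pairing a
               demonstration state with a non-demonstration action,
               mimicking the masked-OT idea.
    """
    cost = -np.asarray(q_values)
    h = np.exp((u[:, None] + v[None, :] - cost) / eps)
    if mask is not None:
        h = h * mask
    # Normalise per state so the weights form a soft assignment over actions.
    return h / h.sum(axis=1, keepdims=True)

# Tiny worked example: 2 states, 3 candidate actions each.
Q = np.array([[1.0, 0.5, 0.1],
              [0.2, 0.9, 0.8]])
H = compatibility(Q, u=np.zeros(2), v=np.zeros(3))
# Actions with higher Q receive proportionally higher weight in each row.

# With a mask, forbidden pairs get exactly zero weight.
mask = np.array([[1, 0, 0], [1, 1, 1]])
H_masked = compatibility(Q, u=np.zeros(2), v=np.zeros(3), mask=mask)
```

Note that the mask only reshapes the matching weights; as the rebuttal above stresses, it does not restrict which actions the policy itself may sample.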
NMA-tune: Generating Highly Designable and Dynamics Aware Protein Backbones
Accept (poster)
Summary: NMA-tune is a new method that incorporates dynamic information into protein design by conditioning backbone generation on the lowest normal mode of oscillation. It extends RFdiffusion as a plug-and-play component, improving the proportion of samples with high structural quality and desired dynamics. The approach addresses challenges in generating molecules with both designable quality and targeted motions. Molecular Dynamics simulations confirm the presence of the targeted modes. Claims And Evidence: In the introduction, the authors argue that NMA-guidance has relatively poor performance because of the balance between conditional and unconditional terms. However, it is hard to tell that the model gains performance because of trainable conditioning networks since additional loss terms are introduced in Section 4.2. Experimental evidence is needed to support this. Besides, even if they claim they could adjust the unconditional and conditional terms, they are still upscaling the conditional term in the experiments. Methods And Evaluation Criteria: The novelty of the methodology lies in its ability to decouple the conditional score from the large diffusion model during training, eliminating the need to backpropagate through RFdiffusion. Additionally, the evaluation metrics currently only consider designability and sc-cossim; it would be important to also report on novelty and diversity. Moreover, the experiments are limited to only four targets, which may constrain the robustness of the findings presented in the paper. Theoretical Claims: In Eq. 6 and Eq. 7, the author mentions that the Jacobian term can be disregarded. I am curious about how the model can accommodate this, considering that $x_t$ is not input into the correction network. A more theoretical discussion on this topic would be appreciated. Experimental Designs Or Analyses: The results presented in Section 4 are quite limited.
Additionally, the section on evaluation metrics is poorly articulated; it only includes two metrics, which could be effectively conveyed by simply stating the relevant thresholds. In Section 5, the authors have altered the threshold for sc-RMSD for two targets, which compromises the fairness of the experiment. It would be more appropriate to select other, more suitable targets for experimentation. Furthermore, the results in Table 1 indicate that the performance of “tune” is significantly inferior to that of “guid” on the designable metric. The authors need to demonstrate that this discrepancy is not due to an improperly tuned scaling factor affecting conditional and Supplementary Material: I have carefully reviewed the Appendix. Relation To Broader Scientific Literature: This paper builds significantly upon the earlier work titled “Dynamics-informed Protein Design with Structure Conditioning,” presented at ICLR 2024. Many of the theoretical concepts are derived from that foundational research. Essential References Not Discussed: The authors have provided a fair related work discussion section, however, some of the statements are not correct. For example, NMA-guidance also has a trainable GVP based model for denoising. A short literature review of condition guided diffusion model papers is missing here too. Other Strengths And Weaknesses: Weaknesses: The paper lacks organization, with an excessive amount of pages devoted to background and related work. It would be more effective to retain only the essential elements of this content in the main text. Other Comments Or Suggestions: Table 2 and Table 3 should have a head row to show which model is being used. Questions For Authors: All questions are listed in the above sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for carefully going through our manuscript and giving us your constructive feedback. As you mentioned, we tackle “challenges in generating molecules with both designable quality and targeted motions”, and we are happy to see you note the strength of the MD simulations evaluations. Let us address your questions one by one. 1. “[...] the results in Table 1 indicate that the performance of “tune” is significantly inferior to that of “guid” on the designable metric. The authors need to demonstrate that this discrepancy is not due to an improperly tuned scaling factor affecting conditional and …” (though the sentence seems unfinished, we guess it was supposed to say “conditional and unconditional terms”) It seems to us that a couple of your concerns across your review sections are strongly related to this final point you make. Please note that the final criterion for assessing the method’s performance is a simultaneous optimisation of both designability and NMA-loss. The guidance scale of NMA-guidance was fine-tuned for optimal performance in both of those metrics, and it struggled to achieve the balance between conditional and unconditional terms. Fine-tuning even the most advanced protein generative models is often a necessity (e.g. Proteina fine-tunes CFG and auto-guidance weights [1]). While NMA-tune also requires fine-tuning, it achieved a much better balance in those metrics than NMA-guidance, which further motivates the usage of the trainable component. [1]. Proteina: Scaling Flow-based Protein Structure Generative Models, https://arxiv.org/abs/2503.00710 2. “Additionally, the evaluation metrics currently only consider designability and sc-cossim; it would be important to also report on novelty and diversity.” Please take a look at our response to Reviewer FHJ7, where we provide those metrics. 3.
“In Section 5, the authors have altered the threshold for sc-RMSD for two targets, which compromises the fairness of the experiment” Since NMA-tune and NMA-guidance are based on the same base model RFdiffusion, which struggles exactly in the same way to perform structure conditioning of the difficult targets, we believe the comparison of two dynamics-conditioning methods remains fair even with the adjusted sc-RMSD criterion. Ideally, we would evaluate using a standardised benchmark of dynamical-motifs, like RFdiffusion benchmark for static motifs. Currently, no such benchmark exists, and the number of targets we can use is limited. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns. I have raised my scores. --- Reply to Comment 1.1.1: Comment: As the discussion period comes to an end, we would like to thank all the reviewers for the great effort they put into reviewing our work and for appreciating the scientific contributions of NMA-tune.
Summary: The paper introduces NMA-tune, a plug-and-play modification to the RFDiffusion framework aimed at enhancing protein design by integrating Normal Mode Analysis (NMA)-inspired diffusion conditioning correction. The proposed method introduces a computationally efficient conditioning term that utilizes the fully denoised RFDiffusion sample prediction to correct the diffusion trajectory. The authors demonstrate an improvement in motif scaffolding tasks and analyze the approach’s performance from a Molecular Dynamics (MD) perspective. Claims And Evidence: The claims regarding improved motif scaffolding are supported by experiments involving three case-study proteins with well-documented hinge motions. However, the interpretation of the MD analysis results is not entirely clear, and additional references to tables and figures would strengthen the conclusions. Methods And Evaluation Criteria: The proposed method aligns well with the problem statement. Still, a direct comparison with alternative motif scaffolding approaches, such as SMCDiff [a], would further substantiate the claim of performance improvement. a. Diffusion probabilistic modeling of protein backbones in 3D for the motif-scaffolding problem, Trippe et al., 2022 Theoretical Claims: The mathematical description and algorithm appear correct. Experimental Designs Or Analyses: The experimental section is strong. The authors present three case-study proteins with well-documented hinge motions in the literature. However, the results from the MD analysis could be better explained to highlight key findings. Furthermore, the paper would benefit from a direct comparison with other motif scaffolding approaches to establish a more comprehensive baseline. Supplementary Material: The Appendix was reviewed for method details and sampled protein structures.
Relation To Broader Scientific Literature: The paper advances motif scaffolding and plausible protein structure generation by introducing a plug-and-play conditioning correction for RFDiffusion, guiding diffusion toward the lowest non-trivial normal mode. This novel and original approach opens a new direction for integrating physical constraints into generative models. Essential References Not Discussed: All essential references are discussed. Other Strengths And Weaknesses: The paper is well-written but not always easy to follow. While the method description is clear, additional intuition and explanations for some experimental choices would improve clarity. Including more visualizations of generated proteins in the main text would enhance the presentation. Other Comments Or Suggestions: I suggest the following: 1. Include column descriptions in Tables 2 and 3, as they are not immediately obvious and require searching within the text. 2. Revise the MD analysis section to highlight key findings and provide references to tables for clarity. 3. Compare NMA-tune with other motif scaffolding approaches, such as SMCDiff, to strengthen the empirical evaluation. Questions For Authors: All suggestions were stated in other sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for providing your constructive critique of our work. We are happy to hear that you appreciate the strong points of our paper, particularly you note “the experimental section is strong. The authors present three case-study proteins with well-documented hinge motions in the literature.” As you suggest, in the future we would like to expand NMA-tune to operate with other motif scaffolding approaches as well. While it is not feasible in this short rebuttal period, and the strength of our method lies in dynamics rather than just structure conditioning, using other motif scaffolding models would be a great contribution to the community. We will revise for clarity the Tables’ descriptions and the findings of the MD evaluation. Please let us know if you have any direct comments on the writing style of the results Section. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I keep my current score. --- Reply to Comment 1.1.1: Comment: As the discussion period comes to an end, we would like to thank all the reviewers for the great effort they put into reviewing our work and for appreciating the scientific contributions of NMA-tune.
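Since the review and rebuttals above repeatedly invoke the "lowest non-trivial normal mode", a textbook anisotropic network model (ANM) sketch may help readers unfamiliar with NMA: build a harmonic Hessian over Cα coordinates, eigendecompose it, and skip the six zero modes (rigid-body translations and rotations). The cutoff, spring constant, and toy helix below are our own assumptions; this is generic NMA, not NMA-tune's implementation:

```python
import numpy as np

def anm_lowest_mode(coords, cutoff=10.0, gamma=1.0):
    """Lowest non-trivial normal mode of an anisotropic network model.

    coords : (n, 3) C-alpha coordinates. Returns the sorted eigenvalues of
    the 3n x 3n ANM Hessian and mode 7 (the first non-rigid-body mode),
    reshaped to per-residue (n, 3) displacement vectors.
    """
    n = len(coords)
    hess = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff ** 2:
                continue  # only residue pairs within the cutoff interact
            block = -gamma * np.outer(d, d) / r2  # harmonic off-diagonal block
            hess[3*i:3*i+3, 3*j:3*j+3] = block
            hess[3*j:3*j+3, 3*i:3*i+3] = block
            hess[3*i:3*i+3, 3*i:3*i+3] -= block
            hess[3*j:3*j+3, 3*j:3*j+3] -= block
    vals, vecs = np.linalg.eigh(hess)
    return vals, vecs[:, 6].reshape(n, 3)

# Toy backbone: 40 pseudo-residues along a helix (assumed geometry).
t = np.linspace(0.0, 12.0, 40)
helix = np.stack([4 * np.cos(t), 4 * np.sin(t), 1.5 * t], axis=1)
vals, mode = anm_lowest_mode(helix)

# Self-similarity of the unit-norm mode vector is 1; a cosine similarity
# between a designed backbone's mode and a target mode is the kind of
# quantity that sc-cossim-style metrics build on.
cos_sim = abs(np.dot(mode.ravel(), mode.ravel()))
```

The six near-zero eigenvalues correspond to rigid-body motion; the seventh eigenpair is the "lowest non-trivial mode" that both the conditioning target and the MD comparison refer to.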
Summary: This paper introduces a training-based method to address the problem of dynamic-conditioned generation of proteins. Specifically, they replace the prior-guided term with a simpler, more computationally efficient one to improve sampling speed, and they introduce a small network to learn such conditioned mappings. ## update after rebuttal I keep my original rating. Claims And Evidence: The authors claim that their tuning-based method offers superior performance and accelerates sampling. However, as shown in Table 1, the designability is notably worse compared to the guided method. Furthermore, there are no metrics provided that consider sampling speed. Given that the tuning-based method incurs additional training costs, I am not convinced that it is a better strategy than the guided approach. Methods And Evaluation Criteria: I question the necessity of training a network for this purpose. Moreover, the evaluation is conducted on a relatively small dataset, which may not accurately reflect real-world performance. Theoretical Claims: I have reviewed the theoretical claims and find them to be correct. Experimental Designs Or Analyses: I have examined the experiments and compared them with prior works: 1. I am concerned about the performance gap between the tuning-based method and the prior guidance method. Although the paper asserts that the learned guidance term is better, the actual performance is worse. 2. This paper evaluates methods on a very small subset of dynamic proteins, whereas prior work uses a much larger dataset (10,037 protein structures). I recommend evaluating the proposed method on this larger dataset. 3. It would be valuable to include more methods for comparison. For instance, beyond RFDiffusion, other protein generative models such as Frameflow, Framediff, and Genie (as used in the NMA-Guidance paper) should be considered. 
Additionally, I suggest including other guidance methods, such as classifier-based, classifier-free, and loss-guidance methods. 4. In the MD-evaluation experiment, there is a lack of baselines. 5. I recommend incorporating more dynamic-related metrics in addition to protein design metrics, since the paper claims to achieve more accurate dynamic-conditioned generation. Supplementary Material: I have read and verified the supplementary material. Relation To Broader Scientific Literature: The idea of using loss-guidance is well-established (Song et al., ICML 2023), but this method employs a trainable network to approximate this term. The study of dynamic-conditioned protein generation has been explored in the NMA-guidance paper (Urszula et al., ICLR 2024). Essential References Not Discussed: None. Other Strengths And Weaknesses: 1. The writing and presentation need significant improvement. Sections 3 and 4 are particularly difficult to read, and there is no main figure in the method section. 2. The intuition behind using a trainable network is unclear, and the results do not sufficiently support the claims. 3. The experiments lack numerous baselines and should be conducted on a larger dataset. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for providing your valuable feedback. Firstly, let us motivate again the usage of the trainable conditioner in NMA-tune. The analytical form of the NMA-loss that we use for loss-guidance might steer the generation into structures that have the ideal NMA-loss, but do not resemble proteins at all. Whether the balance between conditional and the unconditional terms can even be achieved under this loss formulation is the question we are tackling in this work. Most importantly, we show that NMA-tune achieves this balance, and optimises a number of metrics simultaneously, while NMA-guidance without a trainable conditioner struggles to do so. Next, let us address the other crucial points you make. 1. “However, as shown in Table 1, the designability is notably worse compared to the guided method.” The designability as measured by sc-RMSD is an important metric to understand the effects of conditioning. However, the most important metric that determines the method’s success is not just the number of designable samples, but the number of designable and successfully conditioned samples as measured by sc-RMSD and sc-cossim. **It is not enough to obtain a high-quality sample - a sample must be both designable and meet the conditions, and with NMA-tune we show that we can achieve designability whilst meeting these conditions.** 2. “Furthermore, there are no metrics provided that consider sampling speed.” Please see our response to Reviewer FHJ7, where we provide the comparison of sampling speed, and show NMA-tune is much faster than NMA-guidance.
“Moreover, the evaluation is conducted on a relatively small dataset, which may not accurately reflect real-world performance” and “This paper evaluates methods on a very small subset of dynamic proteins, whereas prior work uses a much larger dataset (10,037 protein structures).” **Please note that NMA-guidance does not evaluate the conditioning method on 10,037 structures.** Authors of NMA-guidance train their own unconditional protein generative model on 10,037 structures, while we use the already established and thoroughly evaluated RFdiffusion as our unconditional model. For the evaluation of the dynamics-conditioning on its own, the authors use 600 randomly selected targets, which roughly matches the number of samples we take for our carefully selected, real-world targets. For that evaluation part, NMA-guidance also uses a different, less restrictive NMA-loss formulation (disregarding the orientation of the conditioning residues). Regarding your concern that our method might not translate into real-world performance, please note that our MD evaluation is designed to tackle exactly this question. MD simulations are regarded as a very close proxy to the real-world behaviour of molecules, and to the best of our knowledge there are no better in silico approximations to real-world motions than MD simulations. In our experiments section, we show that the target motion is indeed present in the MD trajectory, therefore showing that this success translates to real life. 3. “In the MD-evaluation experiment, there is a lack of baselines.” Our MD simulation experiments were designed to prove the link between the presence of the target normal mode in the designed sample and the MD trajectory (a close approximation to real life).
Most importantly, they show that our *in silico* metrics (in which NMA-tune performs best) translate to real life, and therefore any designable, high-quality and successfully dynamics-conditioned sample will exhibit the targeted motion in the MD trajectory. This is the crucial link for assessing the method’s performance in biological tasks, but since 1) the link between *in silico* metrics and the MD trajectory is established; 2) NMA-tune performs best as measured by *in silico* metrics; and 3) MD simulations are incredibly computationally expensive (one simulation can take about a day) and we have limited bandwidth, we deemed it sufficient to include only NMA-tune in the MD evaluation part.
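For context on the Normal Mode Analysis that the review and rebuttal above repeatedly refer to, here is a minimal, illustrative elastic-network sketch of how the lowest normal modes (the quantities the conditioning targets) can be computed from C-alpha coordinates. This is a generic anisotropic network model (ANM), not the formulation used in either paper; the cutoff and spring constant are arbitrary placeholder choices.

```python
import numpy as np

def anm_hessian(coords, cutoff=15.0, gamma=1.0):
    """Hessian of a generic anisotropic network model built from (n, 3) C-alpha coords."""
    n = len(coords)
    H = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff ** 2:
                continue  # only residues within the cutoff interact
            block = -gamma * np.outer(d, d) / r2
            H[3*i:3*i+3, 3*j:3*j+3] = block
            H[3*j:3*j+3, 3*i:3*i+3] = block
            H[3*i:3*i+3, 3*i:3*i+3] -= block  # diagonal = minus row sum of off-diagonals
            H[3*j:3*j+3, 3*j:3*j+3] -= block
    return H

def lowest_modes(coords, k=1, **kw):
    """Return the k lowest non-trivial modes, skipping the 6 rigid-body zero modes."""
    w, v = np.linalg.eigh(anm_hessian(coords, **kw))
    return w[6:6 + k], v[:, 6:6 + k]
```

The Hessian is positive semidefinite with exactly six zero eigenvalues (rigid-body translations and rotations) for a connected, non-degenerate structure, so the eigenvectors from index 6 upward are the softest internal motions.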
Summary: The paper aims to propose a solution to conditioning protein structure generation on structural dynamics. The authors define protein structure dynamics as the lowest normal modes of oscillations computed with Normal Mode Analysis (NMA) and propose an efficient strategy to incorporate this information into existing generative models for protein structure. In particular, the authors demonstrate that pretrained, unconditional diffusion-based generative models, such as RFdiffusion, can be turned into dynamics-conditional ones via loss guidance without requiring retraining or fine-tuning, and call their framework NMA-tune. The authors extend the work of Komorowska et al. (2024) and introduce an SO(3)-equivariant conditioner, which learns the conditional loss guidance term in contrast to the analytical approximation proposed previously in Komorowska et al. (2024). The conditioner essentially learns the set of translations that need to be applied to the unconditional noise in order to sample from the conditional probability distribution. The authors also propose a training strategy along with an SE(3)-invariant NMA-loss for their conditioner to solve the problem of generating structures with certain functional motifs that encode a movement of interest. In a series of experiments, the authors demonstrate the efficiency of their method and outperform a baseline. ## Update after rebuttal I will keep my score. I believe this work makes an interesting contribution and consistently achieves better performance on relevant metrics. Claims And Evidence: Thoroughly conducted experiments convincingly demonstrate that the proposed method outperforms a previously published baseline. The authors report informative metrics on conditionally generated protein structures, such as secondary structure composition, and assess the effect of conditioning on the well established designability metric. 
They justify the selection of the proteins used in experiments, conduct extensive Molecular Dynamics (MD) simulations to cross-validate recapitulation of the NMA-derived lowest modes passed as a condition, and clearly state their limitations. Methods And Evaluation Criteria: Yes. Although NMA, while being computationally efficient, only captures types of harmonic motions about the equilibrium state, which might not be sufficient to condition on biologically relevant, functional directional movements [1], such as state-transitions performed during ligand binding [2] or catalysis [3]. The authors acknowledge this limitation and compare their results with MD simulation-derived lowest modes in different time windows (Tables 2 and 3). To the best of my knowledge, no precise and computationally efficient methods exist to generate and incorporate reliable mid- to long-range protein structure dynamics during training of the conditioner. Thus, the motivation to use NMA is clear. [1] Alexandrov, Vadim, et al. ”Normal modes for predicting protein motions: a comprehensive database assessment and associated Web tool.” Protein science 14.3 (2005): 633-643. [2] Deng, Hong, Nick Zhadin, and Robert Callender. ”Dynamics of protein ligand binding on multiple time scales: NADH binding to lactate dehydrogenase.” Biochemistry 40.13 (2001): 3767-3773. [3] Schramm, Vern L. ”Enzymatic transition states and transition state analogues.” Current opinion in structural biology 15.6 (2005): 604-613. Theoretical Claims: Although the authors mostly adopt theoretical foundations from previously published work Komorowska et al. (2024), they propose to learn the conditional loss term directly from the predictions of denoised protein structures made by the generative model via a graph neural network. The authors demonstrate that this approach performs better than estimating the guidance term analytically as in Komorowska et al. (2024). 
The goal is to sample from the conditional posterior p(x0 | y), where y encodes the eigenvectors of a Hessian matrix from the NMA computation. Instead of learning the eigenvectors directly, the authors propose to learn a correction term to the unconditional noise which best matches the eigenvectors of the precomputed lowest modes. Experimental Designs Or Analyses: • Table 1: For reproducibility, I suggest adding the full inference settings of RFdiffusion beyond the noise scale (e.g., number of timesteps). It would be informative to include novelty and diversity metrics (see example definitions in [4]) for generated protein structures. Lower designability of NMA-tune compared to unconditional generation might stem from novel backbone geometries. • Figures 2 and 3: These report Cα-Cα distances and secondary structure distributions without filtering for designability. I would like to see distributions for only designable samples. • Inference Time: Absolute inference time for each sampling method would be informative. • Section 5.2: MolProbity and SwissModel scores may be unfamiliar to some readers. An appendix explaining these metrics would be useful. • Figure 1: Good visualization of the sc-cossim, but the colors of the arrows should be adjusted for better readability, especially in Figure 1(b), where it is difficult to see the direction of the arrows. [4] Yim, Jason, et al. ”Fast protein backbone generation with SE(3) flow matching.” arXiv preprint arXiv:2310.05297 (2023). Supplementary Material: Yes, I reviewed the appendices. Relation To Broader Scientific Literature: To the best of my knowledge, no existing generative models condition protein structure generation on dynamics observables. The only relevant baseline is NMA-guidance, which inspired the refinement proposed in this paper.
Essential References Not Discussed: Another recent work which focuses on high designability is perhaps worth mentioning: Wagner et al., Generating Highly Designable Proteins with Geometric Algebra Flow Matching, NeurIPS 2024. Other Strengths And Weaknesses: Strengths: • The paper is very well-written with smooth transitions between sections. The authors state the limitations of the proposed method. Weaknesses: • The novelty of the theoretical contribution is limited. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
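Since sc-RMSD (the RMSD between a designed backbone and its refolded prediction) recurs throughout this review thread, a minimal sketch of RMSD after optimal rigid superposition (the standard Kabsch construction, not code from either paper) may be useful as a reference point:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (n, 3) coordinate sets after optimal rigid superposition."""
    P = P - P.mean(axis=0)  # remove translation
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt  # proper rotation (det = +1, no reflection)
    return float(np.sqrt(((P @ R - Q) ** 2).sum() / len(P)))
```

A rotated-and-translated copy of a structure scores ~0, so only genuine geometric deviation contributes to the metric.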
Rebuttal 1: Rebuttal: Thank you for reading our manuscript in great detail. We are glad to know that you find our work well-written and that you appreciated the strength of our experimental evaluation. Thank you for pointing out that the “conducted experiments convincingly demonstrate that the proposed method outperforms a previously published baseline”. We are grateful for your feedback on how to improve the clarity of the manuscript. Please note that during the rebuttal period there is no option to upload a revised version of the paper, but we certainly will implement those changes in the future version of the manuscript. Nevertheless, we are happy to report the extra evaluations you would like to see here.

1. “Report Cα-Cα distances and secondary structure distributions without filtering for designability.”

We recalculated the results presented in Figures 2 and 3 in Appendix B, but using only designable samples. The number of designable samples is much lower than the total number of samples, therefore we recomputed results using designable samples from all 3 seeds (for Figures 2 and 3 we used 110 samples). The $C_{\alpha}$-$C_{\alpha}$ distances and secondary structure distributions are now as follows.

For 1hhp assembly, $C_{\alpha}$ dist. (Å): NMA-guid.: 3.7679 ± 0.0075; NMA-tune: 3.7616 ± 0.0080; Uncond.: 3.7679 ± 0.0075.

### Secondary Structure Usage (1hhp assembly)

|        | NMA-guid. | NMA-tune | Uncond. |
|--------|-----------|----------|---------|
| Alpha  | 0.5338    | 0.4286   | 0.5532  |
| Beta   | 0.2689    | 0.3378   | 0.2539  |
| Coil   | 0.1974    | 0.2337   | 0.1929  |

For 1exr, $C_{\alpha}$ dist. (Å): NMA-guid.: 3.7767 ± 0.0032; NMA-tune: 3.7780 ± 0.0025; Uncond.: 3.7803 ± 0.0033.

### Secondary Structure Usage (1exr)

|        | NMA-guid. | NMA-tune | Uncond. |
|--------|-----------|----------|---------|
| Alpha  | 0.8846    | 0.8783   | 0.8903  |
| Beta   | 0.0027    | 0.0013   | 0.0015  |
| Coil   | 0.1127    | 0.1204   | 0.1082  |

Changes in the above statistics follow the same patterns as when calculated using all samples. $C_{\alpha}$ distances remain in the realistic range both for NMA-tune and NMA-guidance, and again both methods slightly disturb the secondary structure statistics, which is expected since conditioning induces a distribution shift.

2. “Absolute inference time for each sampling method would be informative.”

We calculated the running time of the sampling loop (with the default RFdiffusion number of time steps equal to 50) for NMA-tune and NMA-guidance. Mean loop running time for each target on our Nvidia A100 80GB machine, averaged over 110 samples per target, was about 36 seconds for NMA-tune, and about 63 seconds for NMA-guidance, **which gives about a 75% speedup**.

3. “It would be informative to include novelty and diversity metrics”

Following your suggestion, we use Foldseek [1] and MaxCluster [2] to evaluate novelty and diversity for two targets: 1exr and 1hhp_assembly. For novelty, we compute the TM-score to the AFDB and PDB100 databases available on the Foldseek server, and for each sample retain the max score. We then report the mean of those max TM-scores (the lower, the more novel the samples).

| Novelty (TM-scores) | 1exr, $\eta$=0.0 | 1exr, $\eta$=1.0 | 1hhp_a, $\eta$=0.0 | 1hhp_a, $\eta$=1.0 |
|---------------------|------------------|------------------|--------------------|--------------------|
| NMA-tune            | 0.68             | 0.64             | 0.52               | 0.53               |
| NMA-guid.           | 0.71             | 0.68             | 0.57               | 0.55               |

While it seems that NMA-tune outperforms NMA-guidance by a narrow margin, the novelty of both methods remains in a range comparable to other generative models (FrameDiff [3], FrameFlow [4]).
We calculate diversity using MaxCluster with hierarchical clustering (single linkage method), in sequence-independent mode, with a TM-score threshold of 0.6. From the set of all 110 samples generated per target per noise scale for one seed, we take samples for $\eta=0.0$ and $\eta=1.0$ together, and discard the non-designable samples. Results for the remaining designable samples are as follows (clusters / num. of designable samples):

NMA-tune: 1exr target: 13/118; 1hhp_assembly target: 32/34

NMA-guidance: 1exr target: 10/137; 1hhp_assembly target: 52/91

As expected, diversity depends on the scaffolding target. Neither of the methods collapses to sampling from a single cluster.

[1] https://www.nature.com/articles/s41587-023-01773-0 \ [2] https://www.sbg.bio.ic.ac.uk/maxcluster/ \ [3] https://arxiv.org/pdf/2302.02277 \ [4] https://arxiv.org/pdf/2310.05297 --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. There seems to be a slight trend towards incorporating a higher amount of coils in the scaffolds upon conditioning. It would be interesting to see a plot or a table with scRMSD for the whole scaffold and for a motif, for both NMA-tune and NMA-guidance, compared with the average scRMSD of unconditional samples. This could help assess whether sc-metrics are just borderline above the thresholds. I keep my positive rating. --- Reply to Comment 1.1.1: Comment: Thank you for pointing out that we should check whether the designable samples are not too close to the acceptance threshold. We performed a sanity check of designable samples for two targets at two $\eta$ values, and computed the median sc-motif-RMSD and sc-RMSD. We observed that RMSD values are often safely lower than the acceptance threshold, and we confirmed that designable samples are not just 'barely' designable. In the tables we report median values in Angstrom.
### 1hhp $\eta$=0.0

| Method   | scRMSD | sc-motif-RMSD |
|----------|--------|---------------|
| NMA-tune | 0.864  | 0.684         |
| NMA-guid | 0.706  | 0.645         |
| uncond   | 0.678  | 0.667         |

### 1hhp $\eta$=1.0

| Method   | scRMSD | sc-motif-RMSD |
|----------|--------|---------------|
| NMA-tune | 0.935  | 0.688         |
| NMA-guid | 0.816  | 0.737         |
| uncond   | 0.776  | 0.720         |

### 1exr $\eta$=0.0

| Method   | scRMSD | sc-motif-RMSD |
|----------|--------|---------------|
| NMA-tune | 0.714  | 0.406         |
| NMA-guid | 0.732  | 0.411         |
| uncond   | 0.658  | 0.381         |

### 1exr $\eta$=1.0

| Method   | scRMSD | sc-motif-RMSD |
|----------|--------|---------------|
| NMA-tune | 0.840  | 0.395         |
| NMA-guid | 0.889  | 0.408         |
| uncond   | 0.771  | 0.342         |

Finally, we would like to thank you for the effort you put in reviewing our work, and for appreciating the scientific contributions of NMA-tune.
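The novelty and diversity protocols described in the rebuttal above (mean over samples of the max TM-score to any reference; single-linkage clustering at a TM-score threshold of 0.6) can be sketched as follows. This is a schematic re-implementation from the rebuttal's description, assuming pairwise TM-scores have already been computed by an external tool such as Foldseek or MaxCluster:

```python
def novelty(tm_to_refs):
    """Mean over samples of the max TM-score to any reference (lower = more novel)."""
    return sum(max(scores) for scores in tm_to_refs) / len(tm_to_refs)

def count_clusters(tm_pairwise, threshold=0.6):
    """Single-linkage clustering: union samples whose pairwise TM-score >= threshold."""
    n = len(tm_pairwise)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if tm_pairwise[i][j] >= threshold:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})
```

With these two functions, the rebuttal's "clusters / num. of designable samples" ratio is just `count_clusters(tm) / len(tm)` over the designable subset.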
Going Deeper into Locally Differentially Private Graph Neural Networks
Accept (oral)
Summary: This paper presents UPGNet, a utility-enhanced framework for locally private graph learning. The main contribution is a three-stage pipeline that generalizes local differential privacy protocols for perturbing node features, aiming to balance privacy preservation with improved learning utility. The authors identify two key factors influencing utility: feature dimension and neighborhood size. To address these, they propose two novel components: the Node Feature Regularization (NFR) layer, which reduces the effective feature dimension, and the High-Order Aggregator (HOA) layer, which mitigates over-smoothing by expanding the effective neighborhood size. The theoretical analysis and experimental results validate the effectiveness of the proposed framework in achieving a balance between privacy and utility in graph learning tasks. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, I checked Experimental Designs Or Analyses: I reviewed the experimental designs and analyses presented in the paper, particularly those evaluating the performance of the proposed UPGNet framework. The experimental setup includes comprehensive testing on four benchmark datasets and compares UPGNet against several baselines. The experimental results validate the effectiveness of the proposed framework in achieving a balance between privacy and utility in graph learning tasks. Supplementary Material: Yes, I reviewed parts of the supplementary material, specifically the appendices that provide additional details on the theoretical analysis and algorithmic aspects of the paper. Relation To Broader Scientific Literature: The key contributions of this paper are closely related to the broader scientific literature on privacy-preserving machine learning, particularly in the context of Graph Neural Networks (GNNs) and local differential privacy (LDP). 
The paper's key contributions push forward both the theoretical and practical aspects of applying LDP to GNNs. Essential References Not Discussed: The citations and related work in the paper are comprehensive. Other Strengths And Weaknesses: Strengths: (i) Originality and Innovation: The problem of this paper is interesting and significant, and the method proposed by the authors significantly improves the utility of privacy-preserving graph learning without compromising privacy. (ii) Clear Theoretical and Experimental Support: The paper is well-structured, with a solid theoretical foundation to justify the proposed methods. (iii) Practical Relevance: The proposed methods and their demonstrated effectiveness in privacy-preserving settings could have broad applications, making the research practically significant. Weaknesses: (i) Provide more description of the node classification task. This paper conducts experiments based on the node classification task and verifies the effectiveness of the UPGNet method in terms of utility enhancement compared to other methods. Providing more specific instructions on the node classification task is necessary. (ii) The theorem to be referenced should have been presented earlier in the text. (iii) Equation (2) in the paper uses the superscript $k$ to indicate the layer, while Equation (7) uses the superscript $(k)$ in a slightly different notation. This inconsistency in formalization can confuse readers, making it harder to follow the mathematical derivations. It is suggested that the authors review the formalization of these equations and ensure that the notation for layers is consistent throughout the paper. Other Comments Or Suggestions: see the weaknesses Questions For Authors: (i) How does the HOA layer in the paper differ from multi-hop graph structures? (ii) How is the effectiveness of NFR in improving utility validated in the experiments? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments and suggestions, which have significantly contributed to improving the quality of our paper. Detailed responses to each comment are provided below. **Q1: Provides more description of the node classification task. This paper conducts experiments based on the node classification task and verifies the effectiveness of the UPGNet method in this paper in terms of utility enhancement compared to other methods. Providing more specific instructions on the node classification task is necessary.** **R1:** Thank you for your valuable suggestion. A more detailed description of the node classification task, along with its formal definition, will be included in the revised manuscript. The node classification task [1][2] is a fundamental problem in graph learning, where the objective is to predict the labels of nodes based on the graph structure and available node features. Given a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is the set of nodes and $\mathcal{E}$ is the set of edges, each node $v_i \in \mathcal{V}$ is associated with a feature vector $\mathbf{x}_i \in \mathbb{R}^d$ and may have a corresponding label $y_i$ from a predefined label set $\mathcal{Y}$. The goal is to learn a function $f: \mathcal{V} \to \mathcal{Y}$ that assigns labels to unlabeled nodes based on their attributes and graph connectivity. > [1] Kipf, Thomas N., and Max Welling. "Semi-Supervised Classification with Graph Convolutional Networks." ICLR. 2017.\ [2] Bhagat, Smriti, Graham Cormode, and S. Muthukrishnan. "Node classification in social networks." Social network data analytics (2011): 115-148. **Q2: The theorem to be referenced should have been presented earlier in the text.** **R2:** Thank you for your suggestion. The manuscript will be revised to ensure that the referenced theorem appears earlier in the text, improving the logical flow and readability. 
**Q3: Equation (2) in the paper uses the superscript $k$ to indicate the layer, while Equation (7) uses the superscript $(k)$ in a slightly different notation. This inconsistency in formalization can confuse readers, making it harder to follow the mathematical derivations. It is suggested that the authors review the formalization of these equations and ensure that the notation for layers is consistent throughout the paper.** **R3:** Thank you for pointing out this inconsistency in our notation. The notation for layer indices has been carefully reviewed and unified throughout the paper to improve clarity and consistency. Specifically, we now consistently use the superscript [$\cdot^k$] across all equations to denote the $k$-th layer in our model. **Q4: How does the HOA layer in the paper differ from multi-hop graph structures?** **R4:** Thank you for your question. The key difference between the High-Order Aggregator (HOA) and traditional multi-hop graph structures lies in the way information is aggregated across different neighborhood scales. While multi-hop methods explicitly expand the receptive field to fixed-hop neighborhoods, HOA dynamically balances information across different hops, ensuring that the energy ratio $\Phi_K$ (Equation 8 in Section 3.2.2) approaches 0 as $K \to \infty$, which reduces over-smoothing. Unlike standard multi-hop aggregation, which applies uniform or predefined weight decay, HOA assigns personalized weightings, prioritizing closer neighbors to enhance noise calibration. This design prevents excessive noise propagation from distant nodes while still leveraging high-order information adaptively. These differences make HOA more effective in maintaining utility under noisy conditions. For more details, please refer to Section 3.2.2 on HOA and Section 4.4 on HOA ablation experiments in the paper. **Q5: How is the effectiveness of NFR in improving utility validated in the experiments?** **R5:** Thank you for your question. 
In Section 4.3, a detailed ablation study is conducted to validate the effectiveness of the Node Feature Regularizer (NFR) layer in improving utility. Specifically, the node classification accuracy of the piecewise mechanism (PM) and multi-bit mechanism (MBM) with and without the NFR layer is compared across different datasets and privacy budgets. As shown in Table 2 in the paper ($\star$ in the table represents the integration of NFR), incorporating NFR layer consistently enhances accuracy, with more significant improvements under smaller privacy budgets. For example, in the Cora dataset, MBM$^\star$ improves accuracy by 7% over MBM at $\epsilon=0.01$, while the improvement is 2.6% at $\epsilon=1.0$. This demonstrates that NFR effectively mitigates noise impact, especially under strong privacy constraints. Additional results are provided in Appendix D.5.
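To illustrate the kind of hop-weighted propagation discussed in R4 above (high-order aggregation that prioritizes closer neighbors), here is a generic personalized-propagation sketch in the style of APPNP, where a teleport term assigns geometrically decaying weight to higher hops. This is an illustrative analogue only, not the paper's HOA layer, and `alpha` is an arbitrary placeholder:

```python
import numpy as np

def propagate(adj, feats, K=10, alpha=0.5):
    """Personalized propagation: h <- (1 - alpha) * A_hat @ h + alpha * feats.

    Unrolling the recursion shows hop k receives weight ~ alpha * (1 - alpha)^k,
    so nearby neighbors contribute more than distant ones.
    """
    a = adj + np.eye(len(adj))               # add self-loops
    deg = a.sum(axis=1)
    a_hat = a / np.sqrt(np.outer(deg, deg))  # symmetric normalization
    h = feats.copy()
    for _ in range(K):
        h = (1 - alpha) * a_hat @ h + alpha * feats
    return h
```

Because the teleport term keeps re-injecting the original features, the iteration converges to a fixed point instead of over-smoothing toward a constant vector as plain repeated averaging would.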
Summary: This study introduces the UPGNET framework, which aims to protect user privacy through Local Differential Privacy (LDP) while enhancing the learning utility of graph neural networks. It innovatively proposes the High-Order Aggregator and Node Feature Regularization layers to optimize feature dimensions and neighborhood size. Experimental results demonstrate that UPGNET outperforms existing methods on real-world graph datasets. This work holds significant implications for private machine learning, particularly in the context of privacy-preserving graph neural networks. Claims And Evidence: The claims made in the study are supported by clear and convincing evidence. The paper provides a solid theoretical foundation for the proposed methods, such as the analysis of key factors impacting the utility of privacy-preserving graph learning. Additionally, extensive experiments on benchmark datasets validate the effectiveness of the proposed framework UPGNET in terms of both privacy preservation and learning utility. The performance comparisons with baseline methods like LPGNN and Solitude further strengthen the claims. Methods And Evaluation Criteria: The proposed methods and evaluation criteria, including the benchmark datasets (Cora, CiteSeer, LastFM, Facebook), are well-suited to the problem at hand. Theoretical Claims: I checked the proofs in this paper, including unbiased estimation, key factor analysis, feature sparsification analysis in NFR, etc., and these proofs are correct. Experimental Designs Or Analyses: I checked the experimental sections of this paper, including the experimental setup, performance validation of UPGNET, ablation study of NFR and HOA, architecture comparison and empirical privacy attack defense experiments. The experimental validation of this study is adequate and detailed. Supplementary Material: I reviewed the appendix to this paper. 
Relation To Broader Scientific Literature: This paper addresses an important and timely issue in privacy preserving machine learning. The methods in the paper effectively enhance the utility of privacy graph learning. Essential References Not Discussed: There are no essential references that have been overlooked in this paper. Other Strengths And Weaknesses: Strengths 1. this paper addresses significant challenges in privacy-preserving gnn, with a special focus on utility loss due to local differential privacy perturbations. 2. this paper validates the effectiveness of UPGNET for utility enhancement and privacy preservation through detailed theoretical analyses and extensive experiments. 3. this paper is well organized and well written. Weaknesses 1. Adding more experiments under the GNN model, e.g. GAT. While the paper provides valuable insights into the GNN model's performance, adding experiments specifically focused on GAT would provide a more comprehensive comparison of different GNN architectures. 2. In the experiment section, the use of the ↑ symbols in Table 1 should be clearly defined. Providing an explanation of what the ↑ symbol represents would enhance the reader’s understanding of the data presented. 3. There are places in the article where necessary citations should be added, such as line 176 in the right-hand column on page 4, where citations should be added for proximal gradient descent (PGD). Other Comments Or Suggestions: Suggestion 1: Conducting additional experiments using different GNN models, such as GAT, would provide a more comprehensive evaluation of the framework's performance. Suggestion 2: Providing an explanation of what the ↑ symbol represents would enhance the reader’s understanding of the data presented. Suggestion 3: Add the necessary references. Questions For Authors: 1. Why is it required to select m dimensions for perturbation in d-dimensional node features under LDP? 2. 
Can the method proposed in the paper be applied to more GNN frameworks such as GAT? 3. Why use node classification accuracy as a metric? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments and suggestions, which have significantly contributed to improving the quality of our paper. Detailed responses to each comment are provided below. **Q1: Adding more experiments under the GNN model, e.g. GAT. While the paper provides valuable insights into the GNN model's performance, adding experiments specifically focused on GAT would provide a more comprehensive comparison of different GNN architectures.** **R1:** We appreciate the reviewer’s suggestion to include experiments with GAT to provide a more comprehensive comparison across different GNN architectures. These experiments have been conducted, and the results are presented in Appendix F.6. Specifically, Figure 6 in the original manuscript illustrates the performance of UPGNet compared to various baselines (BASE, Solitude, and LPGNN) under GAT with different privacy budgets $\epsilon \in$ {0.01, 0.1, 1.0, 2.0, 3.0}. The results demonstrate that UPGNet consistently outperforms these baselines across all privacy settings, further confirming its effectiveness in enhancing both utility and privacy preservation in graph learning. To improve clarity and ensure that readers do not overlook these results, the main text will be revised to explicitly highlight the presence of GAT-based experiments. **Q2: In the experiment section, the use of the ↑ symbols in Table 1 should be clearly defined. Providing an explanation of what the ↑ symbol represents would enhance the reader’s understanding of the data presented.** **R2:** Thank you for your valuable feedback. The ↑ symbols in Table 1 represent the utility improvement achieved by integrating the NFR layer, indicating the enhancement in performance when NFR is applied compared to when it is not. 
To ensure clarity for readers, an explanation of the ↑ symbol will be added to both the table caption and the surrounding text, making it clear that this symbol indicates the increase in utility after incorporating the NFR layer. **Q3: There are places in the article where necessary citations should be added, such as line 176 in the right-hand column on page 4, where citations should be added for proximal gradient descent (PGD).** **R3:** Thank you for the suggestion. The appropriate citation for proximal gradient descent (PGD) will be added in line 176 to ensure proper referencing. **Q4: Why is it required to select m dimensions for perturbation in d-dimensional node features under LDP?** **R4:** Thank you for your question. The selection of $m$ dimensions for perturbation in $d$-dimensional node features under LDP is a design choice aimed at controlling the trade-off between privacy and utility. By randomly selecting $m$ dimensions from the $d$ available dimensions, the mechanism focuses on perturbing only a subset of the features, thereby limiting the extent of noise injected. Each selected dimension is perturbed with a privacy budget of $\epsilon/m$, ensuring that the total privacy budget is allocated evenly across the perturbed dimensions. This strategy balances privacy preservation and the maintenance of data utility. **Q5: Can the method proposed in the paper be applied to more GNN frameworks such as GAT?** **R5:** Thank you for your question. As mentioned in Q2, experiments with GAT are presented in Appendix F.6 of the original manuscript. These results demonstrate that our approach is effective across different GNN architectures, confirming its generalizability and applicability to a variety of graph learning models. **Q6: Why use node classification accuracy as a metric?** **R6:** Thank you for your question. 
Node classification accuracy is used as a metric because it directly measures the effectiveness of graph-based models in learning meaningful representations of nodes, which is central to many graph learning tasks. This metric allows us to evaluate the trade-off between privacy preservation and utility. Additionally, node classification accuracy aligns with prior work (Sajadmanesh & Gatica-Perez, 2021; Lin et al., 2022) in the field, ensuring consistency and comparability of our results with established benchmarks. By using this metric, we maintain continuity with existing research while demonstrating the ability of our proposed methods.
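As a concrete illustration of the dimension-sampling strategy described in R4 of this rebuttal (perturb m of the d feature dimensions, each with budget ε/m), here is a generic sketch that uses the Laplace mechanism as a stand-in for the piecewise/multi-bit mechanisms the paper actually builds on. The per-dimension sensitivity `delta` and the d/m rescaling for unbiasedness are standard but hypothetical details, not taken from the paper:

```python
import numpy as np

def sample_and_perturb(x, eps, m, delta=2.0, seed=None):
    """Perturb m of the d coordinates of x, each with privacy budget eps/m.

    Unselected coordinates are set to 0 and selected ones rescaled by d/m,
    so the output is an unbiased estimate of x: E[x_hat] = x.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    selected = rng.choice(d, size=m, replace=False)
    noise = rng.laplace(scale=delta * m / eps, size=m)  # Laplace(delta / (eps/m))
    x_hat = np.zeros(d)
    x_hat[selected] = (x[selected] + noise) * (d / m)
    return x_hat, selected
```

By sequential composition, the m perturbed coordinates together consume the full budget ε, which is the trade-off the rebuttal describes: fewer perturbed dimensions means more budget (less noise) per dimension.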
Summary: This paper aims to enhance the utility of locally differentially private graph learning. Its theoretical analysis derives two key factors affecting the estimation error, i.e., feature dimension and neighborhood size, and concludes that reducing the effective feature dimension and expanding the effective neighborhood size are conducive to enhancing the utility. Based on this conclusion, the paper proposes NFR and HOA layers to optimize the feature dimension and neighborhood scale. The generalization and effectiveness of UPGNet and its components are verified through theoretical analysis and experimental validation. Claims And Evidence: Yes, this paper confirms its claims through detailed theoretical analysis and experimental validation. Methods And Evaluation Criteria: - The proposed method effectively enhances the utility of private graph learning. - The baselines compared and the benchmark datasets employed are comprehensive and sensible. - The adopted evaluation metrics are reasonable and aligned with previous work. Theoretical Claims: Yes, I have checked the correctness of the five proofs of this paper. Experimental Designs Or Analyses: Yes, I have checked the soundness of the experimental design, validation and analysis of this paper. Supplementary Material: Yes, I have reviewed the appendix to this paper. I have focused my review on the additional experimental sections in the appendix that complement the experimental content of the main body of the paper. Relation To Broader Scientific Literature: With the increasing use of GNNs in privacy-sensitive domains such as social networking and bioinformatics, the issue of data privacy has become critical. This paper hopes to enhance the model utility while utilizing local differential privacy to achieve privacy preservation, making it more practical and promising for real-world applications in sensitive data environments. Essential References Not Discussed: No. 
Other Strengths And Weaknesses: # Strengths: - **Novel and important problem**. With the increasing use of GNNs in privacy-sensitive domains such as social networking and bioinformatics, the issue of data privacy has become critical. This paper hopes to enhance the model utility while utilizing local differential privacy to achieve privacy preservation, making it more practical and promising for real-world applications in sensitive data environments. - **Theoretical Innovations**. This paper derives two key factors affecting the estimation error, i.e., feature dimension and neighborhood size, through theoretical analysis. It is concluded that reducing the effective feature dimension and enlarging the effective neighborhood size are conducive to improving the utility. Then, UPGNet integrates two layers, HOA and NFR, expands the effective neighborhood range, and applies L1 regularization for feature sparsification, which reduces the estimation error in the LDP setting. These analyses and components represent important theoretical innovations. - **Sufficient experimental validation**. This paper conducts detailed experiments on multiple datasets to illustrate the performance of UPGNet under various privacy budgets. The results show that the performance of UPGNet is stable and accurate, proving the practicality and robustness of the proposed framework. # Weaknesses: - **Provide an analysis on the impact of graph density on the performance**. The difference in graph density—whether a graph is sparse or dense—can play a crucial role in the performance of models, especially in tasks involving graph neural networks. Does the difference in density across graphs have an impact on the model performance in this paper, and what is the impact? - **Provide details of different GNN architectures**. 
The paper has validated and compared model performance under different GNN architectures, including Graph Convolutional Networks (GCN), GraphSAGE, and Graph Attention Networks (GAT). However, it would be beneficial to offer more comprehensive implementation details for these architectures. This will help the reader to better understand how these models are applied and compared. - **Adjust charts for clarity**. Adjust the vertical coordinate spacing in the graphs to improve clarity, e.g., Figure 9. Other Comments Or Suggestions: - Provide an analysis on the impact of graph density on the performance. - Provide details of different GNN architectures. The paper validated and compared model performance under GCN, GraphSAGE, and GAT; specific implementation details of these three architectures should be provided. - Adjust charts for clarity. Questions For Authors: - Can you provide details on different GNN architectures? - Could the charts be adjusted for better clarity? - How does the density of the datasets impact the performance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments and suggestions, which have significantly contributed to improving the quality of our paper. Detailed responses to each comment are provided below. **Q1: Provide an analysis on the impact of graph density on the performance. The difference in graph density—whether a graph is sparse or dense—can play a crucial role in the performance of models, especially in tasks involving graph neural networks. The difference in the density of different graphs, does this have an impact on the model performance in this paper and what is the impact?** **R1:** Thank you for the valuable suggestion. As detailed in Appendix F.6, our experiments indicate that graph density has a noticeable impact on the performance of the proposed method. The table below shows the average node degree (Avg. Deg.) of the four datasets used in the paper: | Dataset | Cora | CiteSeer | LastFM | Facebook | | ---- | ---- | ---- |---- |---- | |Avg. Deg.| 3.90 | 2.74 | 7.29| 15.21| Specifically, social network datasets like Facebook and LastFM, which exhibit higher average node degrees (higher graph density), show that accuracy levels off after just a few aggregation steps (e.g., K = 1). This is because the dense nature of these networks allows effective information aggregation early on, making additional steps less effective. In contrast, for sparser graphs like Cora and CiteSeer, the accuracy continues to improve with additional aggregation steps (up to K = 64), indicating that more steps are required to gather sufficient neighboring information for better node representation. These observations suggest that the performance of our method is influenced by the density of the graph, with denser graphs benefiting from fewer aggregation steps and sparser graphs requiring more steps for improved accuracy. For further details, please refer to Appendix F.6 in our paper. **Q2: Provides details of different GNN architectures. 
The paper has validated and compared model performance under different GNN architectures, including Graph Convolutional Networks (GCN), GraphSAGE, and Graph Attention Networks (GAT). However, it would be beneficial to offer more comprehensive implementation details for these architectures. This will help the reader to better understand how these models are applied and compared.** **R2:** Thank you for the valuable suggestion. Appendix F of the paper provides details on the configurations of three different GNN models. To clarify these configurations further, the following elaboration is provided: All GNN models (GCN, GraphSAGE, and GAT) consist of two graph convolutional layers, each with a hidden dimension of 16 and a SeLU activation function, followed by a dropout layer. The GAT model employs four attention heads. Additionally, we have added implementation details of these three GNN models to enhance readers' understanding of their application and comparison, as follows: - **GCN** applies spectral graph convolution by propagating node features using the Laplacian matrix. The core update rule is: $H^{(l+1)} = \sigma(\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)})$, where $\tilde{A} = A + I$ is the adjacency matrix with self-loops, $\tilde{D}$ is its degree matrix, $W^{(l)}$ is the trainable weight matrix, and $\sigma$ is an activation function. - **GraphSAGE** samples a fixed number of neighbors and aggregates their features, making it scalable for large graphs. The general update rule is: $h_v^{(l+1)} = \sigma(W^{(l)} \cdot \text{AGGREGATE}(\{h_u^{(l)}, \forall u \in \mathcal{N}(v)\}))$, where $\text{AGGREGATE}(\cdot)$ represents the aggregation operation. - **GAT** employs self-attention to dynamically assign importance to neighbors. 
The update rule is: $h_v^{(l+1)} = \sigma\left(\sum_{u \in \mathcal{N}(v)} \alpha_{vu} W^{(l)} h_u^{(l)}\right)$, where the attention coefficient $\alpha_{vu}$ is computed as: $\alpha_{vu} = \frac{\exp(\text{LeakyReLU}(a^T [W^{(l)} h_v^{(l)} \| W^{(l)} h_u^{(l)}]))}{\sum_{k \in \mathcal{N}(v)} \exp(\text{LeakyReLU}(a^T [W^{(l)} h_v^{(l)} \| W^{(l)} h_k^{(l)}]))}$. Here, $a$ is a learnable vector, and $\|$ denotes concatenation. **Q3: Adjust charts for clarity. Adjust the vertical coordinate spacing in the graphs to improve clarity, e.g., Figure 9.** **R3:** Thank you for your suggestion. The vertical coordinate spacing in Figure 9 has been adjusted to enhance its clarity. Please refer to Figure 7 in the anonymous link (https://anonymous.4open.science/r/3814/1.pdf) for the updated version.
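The GCN propagation rule quoted in R2 can be sketched in a few lines of NumPy. This is a toy illustration with random stand-in weights, using ReLU in place of the SeLU activation mentioned above:

```python
import numpy as np

# Toy sketch of one GCN layer: H' = sigma(D^{-1/2} (A+I) D^{-1/2} H W).
# Weights are random stand-ins (not a trained model); ReLU is used as sigma.
def gcn_layer(A, H, W):
    A_tilde = A + np.eye(A.shape[0])                # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
    A_hat = d_inv_sqrt @ A_tilde @ d_inv_sqrt       # symmetric normalization
    return np.maximum(A_hat @ H @ W, 0.0)           # ReLU activation

rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # 3-node path graph
H = rng.normal(size=(3, 4))                          # input node features
W = rng.normal(size=(4, 2))                          # layer weight matrix
out = gcn_layer(A, H, W)                             # shape (3, 2)
```

Stacking two such layers with a dropout layer in between matches the two-layer configuration described in R2.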
Summary: The paper introduces UPGNET, a utility-enhanced framework for locally differentially private (LDP) graph learning. It addresses privacy challenges in Graph Neural Networks (GNNs) by proposing a three-stage pipeline to generalize LDP protocols for node feature perturbation. Key contributions include identifying two critical factors influencing estimation error: feature dimension and neighborhood size. To mitigate these, UPGNET incorporates a Node Feature Regularization (NFR) layer using L1-regularization to reduce effective feature dimensions and a High-Order Aggregator (HOA) layer to expand effective neighborhood size, thereby minimizing estimation errors. The framework is compatible with existing LDP mechanisms (e.g., MBM, PM) and integrates with GNN architectures like GCN and GraphSAGE. Experiments on datasets (Cora, Citeseer, LastFM, Facebook) demonstrate UPGNET’s superiority over prior methods (e.g., LPGNN) in utility, achieving higher classification accuracy while reducing attribute inference attack success rates. Theoretical analysis validates the effectiveness of NFR and HOA in noise reduction and aggregation efficiency. The work advances privacy-preserving graph learning by balancing utility and LDP guarantees through novel architectural and optimization strategies. Claims And Evidence: The claims in the paper are **not fully supported by clear and convincing evidence** due to methodological limitations. Below are key issues: ### 1. **Insufficient Baseline Comparison** The paper claims UPGNET "excels over prior methods (e.g., LPGNN)" in utility. However, the comparison lacks **direct implementation details** of LPGNN (e.g., hyperparameters, optimization settings). Without replicating LPGNN’s setup, the superiority claim is **unverifiable**. This violates the requirement for "clear and convincing evidence" under standards like "much more likely than not". ### 2. 
**Weak Empirical Validation of Privacy Defense** The attribute attack experiments (Fig. 5(b)) assume attackers have **full access to neighbors’ features**, which is unrealistic in decentralized LDP settings. The paper does not validate whether the attack model aligns with the threat model (§2.4), which assumes "untrusted servers". Additionally, the reduction in attack accuracy (e.g., 50% to 30%) is not tied to **quantitative metrics** like *ϵ* or *δ* (Differential Privacy guarantees), weakening its rigor. ### 3. **Lack of Isolation in Key Factors** The theoretical claim that reducing **effective feature dimension** *d* and expanding **neighborhood size** *|N(v)|* minimizes estimation error (Thm. 2) is not empirically isolated. For instance: - **NFR Layer**: While L1-regularization reduces *d*, its impact is conflated with HOA’s effect. Ablation studies (e.g., testing NFR alone) are absent. - **HOA Layer**: The claim that HOA mitigates oversmoothing via Dirichlet energy is not compared to standard techniques like residual connections. ### 4. **Over-Reliance on Synthetic Metrics** The utility metric (classification accuracy) is not tied to **privacy-utility trade-off curves** (e.g., accuracy vs. *ϵ*). Without showing Pareto optimality, the claim of "superior utility" remains subjective. Methods And Evaluation Criteria: See comments above Theoretical Claims: Checked, and it looks sound. Experimental Designs Or Analyses: See comments at the end Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper demonstrates **originality** in its dual approach to addressing LDP-GNN utility challenges through feature regularization (NFR) and high-order aggregation (HOA), creatively combining classical L1 regularization with modern GNN architectures. 
This integration of theoretical insights (e.g., identifying feature dimension and neighborhood size as critical factors) with practical modular design (H-N/N-H architectures) offers a novel framework for balancing privacy and utility. The work also **significantly advances** the field by extending LDP protocols to diverse GNN models (GCN, GraphSAGE) and mechanisms (MBM, PM), providing a scalable solution for decentralized graph learning. The ablation studies and theoretical analysis (e.g., Thm. 2 linking HOA to noise reduction) strengthen its technical depth. However, **clarity** in validation is compromised by incomplete baseline comparisons (e.g., LPGNN’s hyperparameters not fully disclosed) and limited empirical scope (e.g., reliance on homophilic citation networks). While the threat model aligns with LDP principles, the attribute attack experiments assume unrealistic full neighbor access, weakening practical relevance. Additionally, the societal impact section is vague, limiting the paper’s real-world applicability. Despite these gaps, the framework’s modular design and theoretical grounding make it a promising step toward robust LDP-GNN systems, particularly for scenarios with strong homophily. Its focus on noise mitigation through architectural innovation sets a clear path for future work in heterogeneous graph settings. Other Comments Or Suggestions: - **Ambiguous abbreviation**: "LPGNN" is first mentioned in Sec. 4 but defined later (Appendix G); forward reference needed or add it to related work. - **Unclear phrase**: "low utility! GNN" and "high utility! GNN" in Figure 1’s caption use exclamation marks without context. Questions For Authors: 1. **Alternative Regularization Approaches**: The paper uses L1-regularization (NFR) to reduce effective feature dimensions. Why were other sparsity-inducing techniques (e.g., dropout, group Lasso) not explored? 
Could these alternatives offer better noise resistance or computational efficiency, and what challenges prevented their adoption? A convincing answer would clarify whether L1 is uniquely suited to LDP-GNNs or if limitations (e.g., sensitivity to hyperparameters) necessitated this choice. 2. **HOA vs. Nonlinear Aggregation**: The HOA addresses oversmoothing through Dirichlet energy and personalized weights. Given that GNNs often use nonlinear activations (ReLU, attention), why was a linear aggregation framework chosen for theoretical analysis (Thm. 1)? Would nonlinear HOA variants better align with real-world GNNs, and what barriers exist to integrating them? Clarification here could strengthen the practical relevance of the theoretical claims. 3. **Baseline Implementation Gaps**: The comparison with LPGNN lacks hyperparameter details. What specific challenges arose in replicating LPGNN’s setup, and how were they addressed? If implementation differences (e.g., optimizer choices) skewed results, this could undermine the claim of UPGNET’s superiority. 4. **Privacy Attack Realism**: The attribute attack assumes attackers have full access to neighbors’ perturbed features (Fig. 5(b)). Why was this unrealistic threat model chosen over scenarios with partial access or decentralized adversaries? Would UPGNET’s defense degrade in more plausible settings, and how does this affect its practical utility? 5. **Scalability to Heterophilic Graphs**: Experiments focus on homophilic datasets (Cora, Citeseer). What challenges arise when applying UPGNET to heterophilic graphs (e.g., Flickr, Reddit), where node features and neighbors disagree? Could HOA’s reliance on multi-hop aggregation amplify noise in such cases, and how would the framework adapt? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments and suggestions. Detailed responses are provided below. The newly added figures and tables can be found in the link ※: https://anonymous.4open.science/r/3814/1.pdf **Q1: Lack of LPGNN's implementation details**\ **R1:** The hyperparameters and optimization settings of LPGNN are detailed in Tables 1 and 2 of the link ※. Based on the settings in the original paper, we further carefully tune the hyperparameters and select optimal values. Additionally, our evaluation follows the standard practices (Sajadmanesh, et al., 2021) to minimize potential biases. **Q2: Why does the attack experiment assume full access to neighbors’ attributes?**\ **R2:** This assumption is designed to set an extreme-case scenario in which attackers are powerful by accessing neighbors' attributes. UPGNet exhibits excellent defensive capabilities under such highly adversarial conditions. In addition, supplementary experiments (shown in Fig. 3 of the link ※) are conducted to investigate the effect of attackers having varying proportions of accessed neighbor information, in which UPGNet consistently shows robust defense performance. **Q3: Does the attack experiment align with the threat model?**\ **R3:** The attack experiment aligns with the threat model outlined in §2.4. Specifically, the attack experiment explicitly targets scenarios in which untrusted servers attempt to infer private attributes, which aligns with the focus of the threat model. Built on this premise, the attack provides a rigorous assessment of the model's defensive capability in adversarial settings. **Q4: Lack of ablation study on NFR and HOA layers**\ **R4:** For the NFR layer, as stated in Tables 1 and 3 of the original manuscript, the ablation study (e.g., testing NFR alone) evaluates its effectiveness in noise calibration. For the HOA layer, Fig. 
2 of the link ※ presents an additional experiment demonstrating its superiority in private graph learning compared to residual connections (RC). RC primarily addresses vanishing gradients but remains ineffective in suppressing LDP noise. In contrast, HOA mitigates oversmoothing by preserving Dirichlet energy, thereby effectively calibrating noise. **Q5: Lack of privacy-utility trade-off curves**\ **R5:** Fig. 1 in the link ※ presents privacy-utility trade-off curves (e.g., accuracy vs. ϵ), illustrating that UPGNet consistently outperforms other baselines across a range of privacy levels. The results highlight a clear utility advantage while maintaining strict privacy guarantees, demonstrating that UPGNet effectively navigates the trade-off between privacy and performance. Furthermore, the observed trend aligns with the principles of Pareto optimality, reinforcing that UPGNet effectively balances privacy preservation and model performance in private graph learning. **Q6: Ambiguous abbreviation \& unclear phrase**\ **R6:** Thank you for pointing out the ambiguous abbreviation "LPGNN". It is first introduced in §4.1 with proper citation and further defined in the Related Work (§5) and App. G for clarity. The unclear phrases in Fig. 1 have been revised; see Fig. 6 in the link ※ for the updated version. **Q7: NFR vs. other regularization approaches**\ **R7:** As stated in Thm. 2, a key aspect of noise calibration in LDP-GNN lies in reducing the effective feature dimensions. NFR employs L1 regularization, which directly facilitates feature selection and enhances fine-grained noise calibration. In contrast, Dropout and Group Lasso fail to achieve precise feature selection tailored for noise calibration. 
Dropout introduces randomness by stochastically deactivating neurons, leading to instability in feature selection, while Group Lasso enforces sparsity at a predefined group level, requiring carefully designed group definitions that may not align with optimal noise calibration. These limitations reduce their effectiveness in mitigating noise impact. Empirical results (Table 4 in the link ※) further validate that NFR outperforms Dropout and Group Lasso in preserving learning utility. **Q8: HOA vs. nonlinear aggregation**\ **R8:** HOA layer is applied prior to the GNN, ensuring compatibility with GNN's nonlinear activations (ReLU, attention). The HOA layer adopts linear aggregation because nonlinear aggregation may distort noise distribution, introducing bias and amplifying estimation errors under LDP constraints. In contrast, its linear aggregation preserves LDP unbiasedness (Thm. 1), effectively mitigating estimation errors and maximizing denoising effectiveness. **Q9: Scalability on heterophilic graphs**\ **R9:** When applied to heterophilic graphs (e.g., Flickr and Reddit), UPGNet continues to demonstrate superior utility compared to other baselines, as shown in Fig. 4 of the link ※. The HOA layer, by preserving Dirichlet energy and enabling personalized aggregation, effectively calibrates noise in heterophilic graphs, as evidenced by the experiments in Fig. 5 of the link ※.
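The feature-selection argument in R7 can be illustrated concretely: the proximal operator of the L1 penalty (soft-thresholding) sets small coordinates exactly to zero, whereas L2-style shrinkage only rescales them. The weight values below are toy numbers, not the paper's NFR layer:

```python
import numpy as np

# Why L1 performs hard feature selection: its proximal operator
# (soft-thresholding) maps small weights exactly to zero, whereas L2
# shrinkage only rescales every coordinate. Toy values for illustration.
def soft_threshold(w, lam):
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

w = np.array([0.9, -0.05, 0.4, 0.02, -0.7])
w_l1 = soft_threshold(w, lam=0.1)   # small weights become exactly 0
w_l2 = w / (1.0 + 0.1)              # every coordinate stays nonzero
```

In the LDP setting this matters because a coordinate driven exactly to zero no longer needs any of the privacy budget, which is the fine-grained calibration effect R7 attributes to NFR.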
Scaffold with Stochastic Gradients: New Analysis with Linear Speed-Up
Accept (poster)
Summary: This paper presents a novel analysis of the SCAFFOLD algorithm, a popular method in federated learning designed to address client heterogeneity. The authors show that the global parameters and control variates of SCAFFOLD form a Markov chain that converges to a stationary distribution, which allows them to establish that SCAFFOLD achieves linear speed-up with respect to the number of clients, up to higher-order terms in the step size. The analysis reveals that SCAFFOLD retains a small higher-order bias due to stochastic updates. The paper derives new non-asymptotic convergence rates and highlights that SCAFFOLD’s global parameters’ variance decreases inversely with the number of clients, enabling scalable and efficient federated learning. Additionally, the authors provide precise characterizations of the algorithm’s variance and bias in the stationary regime. Claims And Evidence: The claims made in this submission are generally supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria generally make sense. Theoretical Claims: I do not carefully check the correctness of any proofs. Experimental Designs Or Analyses: In the experimental setup, using the same step size $\gamma$ for both SCAFFOLD and FedAvg without tuning them individually does not seem very reasonable. Supplementary Material: No. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The paper provides a novel Markov chain-based analysis of SCAFFOLD under stochastic gradient updates, which has not been explored in prior work. This originality strengthens the theoretical understanding of federated learning in stochastic settings. 2. The result showing linear speed-up for SCAFFOLD in stochastic settings without relying on restrictive assumptions (e.g., global step sizes or quadratic objectives) is a significant theoretical contribution. 3. 
The identification and quantification of higher-order bias caused by stochastic updates in SCAFFOLD is a new insight that extends the understanding of bias correction mechanisms in federated learning. Weaknesses: 1. A key significance of the original SCAFFOLD by Karimireddy et al. (2020) is its independence from the heterogeneity of data distributions. However, the convergence rate of the one-learning-rate SCAFFOLD in this paper depends on the level of data heterogeneity $\zeta_1$ (Assumption 4 and Theorem 4.8). 2. The analytical framework is somewhat restricted, as it only applies to strongly convex objectives and assumes full client participation. 3. While it is understandable that this paper is primarily theoretical, the experiments appear relatively simple. Moreover, certain aspects of the experimental setup, such as step size tuning, are not well conducted (see comments in Experimental Designs Or Analyses). Other Comments Or Suggestions: N/A Questions For Authors: 1. Can SCAFFOLD, without a global learning rate, handle partial client participation effectively? 2. Do the theoretical results in this paper imply linear speedup for FedAvg when using only a local learning rate? 3. Does the algorithmic framework strongly rely on convexity assumptions? Can it be extended to non-convex settings? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the positive evaluation of our paper! We are happy that you found that the Markov chain-based analysis of Scaffold is original, and that "this originality strengthens the theoretical understanding of federated learning in stochastic settings," that proving that Scaffold has linear speed-up without global step-size is a "significant theoretical contribution," and that our analysis "extends the understanding of bias correction mechanisms in federated learning". We address your concerns below. **"A key significance of the original SCAFFOLD by Karimireddy et al. (2020) is its independence from the heterogeneity of data distributions. However, the convergence rate of the one-learning-rate SCAFFOLD in this paper depends on the level of data heterogeneity (Assumption 4 and Theorem 4.8)."** Thank you for allowing us to clarify this point. This is also the case in the original Scaffold analysis (see Remark 10 in [1]), and in analyses of related algorithms like Scaffnew [2]. Similarly to [1, 2], heterogeneity appears in our results with a multiplicative factor $(1 - \gamma \mu/4)^{HT}$, and thus does not prevent convergence. It only has a small impact on the convergence rate, as it only affects logarithmic terms in the algorithm's complexity (as in [1, 2]). We insist that this measure of heterogeneity is not an additional assumption, as Assumption 4 always holds as long as a unique global optimum exists. We will replace this assumption and state it as a consequence of strong convexity, which implies that a unique global minimum exists. [1] Karimireddy et al. (2020). Scaffold: Stochastic controlled averaging for federated learning. [2] Mishchenko et al. (2022). Proxskip: Yes! local gradient steps provably lead to communication acceleration! finally!. 
**"The analytical framework is somewhat restricted, as it only applies to strongly convex objectives."** Analyzing strongly convex and smooth objective functions is a common approach in federated learning, as it allows for rigorous theoretical guarantees. While our focus is on this setting, we believe our methodology could serve as a foundation for extending convergence analysis to weaker assumptions, making this a promising direction for future research. **"While it is understandable that this paper is primarily theoretical, the experiments appear relatively simple. Moreover, certain aspects of the experimental setup, such as step size tuning, are not well conducted (see comments in Experimental Designs Or Analyses)."** As noted by the reviewer, our experiments primarily serve an illustrative purpose. We selected a representative step size and number of local iterations to demonstrate the key theoretical insights in practical settings. If the reviewer thinks this limitation is crucial for acceptance of our paper, we are happy to provide more results with more thorough hyperparameter tuning in the final version of the paper. We now answer your questions: **"1. Can SCAFFOLD, without a global learning rate, handle partial client participation effectively?"** Thank you for this insightful question. We believe that results similar to ours would still hold with partial participation and that the analytical framework we propose can be extended to this setting. Nonetheless, we refrained from including it in the paper, as we believe that it would divert attention from the core of this paper's study, which is the linear speed-up phenomenon. "**2. Do the theoretical results in this paper imply linear speedup for FedAvg when using only a local learning rate?"** Yes, indirectly. The results on the stationary distribution $\pi^{(\gamma, H)}$ (that is, Lemma 4.6, Theorem 4.7, and the variance part of Theorem 4.8) would still hold when setting $\xi_c = 0$ throughout the analysis. 
This substitution would directly imply that FedAvg exhibits linear speed-up whenever SCAFFOLD does. **"3. Does the algorithmic framework strongly rely on convexity assumptions? Can it be extended to non-convex settings?"** This is an exciting direction for future research. While we expect that viewing SCAFFOLD as a Markov chain and establishing convergence to a stationary distribution could be done without strong convexity, doing so would require fundamentally different analytical tools. Assuming that, without strong convexity, a stationary distribution would still exist, our current analysis would not allow us to study it as is. Building an analysis in this setting (e.g., following ideas from [1]) is thus a promising direction for future research. [1] Yu et al. (2021). An analysis of constant step size SGD in the non-convex regime: Asymptotic normality and bias. --- Rebuttal Comment 1.1: Comment: Thank the authors for the response. Could the authors further explain why the original Scaffold analysis requires the heterogeneity assumption? I did not find the discussion in your paper. Additionally, I remain unconvinced by using the same step sizes for the tested algorithms, as it seems hard to justify what is a *representative* step size. Nevertheless, I appreciate the theoretical contributions of this paper. --- Reply to Comment 1.1.1: Comment: Thank you for allowing us to clarify further. In the original Scaffold paper, like in our results, the convergence rate for the strongly-convex case (first case in their Theorem VII) includes an optimization error term that decreases exponentially. 
Denoting $\mu$ the strong convexity constant, $L$ the smoothness constant, and $T$ the number of communication rounds, the term that decreases exponentially scales as $ \widetilde{D} \exp( - (\mu / L ) T ) $, where, using the notations from our manuscript, $$\widetilde{D} = \|\| \theta^0 - \theta^\star \|\|^2 + \frac{1}{N L^2} \sum_{c=1}^N \|\| \xi_c^0 - \nabla f_c(\theta^\star) \|\|^2 $$ Taking $\xi_c^0 = 0$, the second term becomes $$\frac{1}{N L^2} \sum_{c=1}^N \|\| \xi_c^0 - \nabla f_c(\theta^\star) \|\|^2 = \frac{1}{N L^2} \sum_{c=1}^N \|\| \nabla f_c(\theta^\star) \|\|^2 = \frac{1}{L^2} \zeta_1^2$$ which is exactly the term that we have in Theorem 4.8 when taking maximal step size. Again, we stress that the way we measure heterogeneity is not an assumption, but rather a **definition** of $\zeta_1^2 = \frac{1}{N} \sum_{c=1}^N \|\| \nabla f_c(\theta^\star) \|\|^2$. This quantity $\zeta_1$ is always defined, as long as the optimal point $\theta^\star$ is defined, which is always the case for strongly convex functions. Regarding experiments, the goal was to show that in two numerical examples, Scaffold has linear speed-up, while FedAvg does not in the case where there is heterogeneity, as it suffers from heterogeneity bias. If you have specific suggestions as to how we could improve these experiments, we will gladly provide more numerical results and add them in the final version of the manuscript. Thank you again for your thorough review and your valuable insights.
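The bias-correction mechanism discussed in this thread can be illustrated with a minimal deterministic sketch of Scaffold on heterogeneous one-dimensional quadratics (toy parameters, full participation, no global step size; an illustration only, not the paper's experimental setup):

```python
import numpy as np

# Minimal deterministic Scaffold sketch on N heterogeneous 1-D quadratics
# f_c(t) = a_c * (t - b_c)^2 / 2, with control variates and no global
# step size. Toy parameters; control variates remove heterogeneity bias.
rng = np.random.default_rng(0)
N, H, gamma, rounds = 10, 5, 0.01, 2000
a = rng.uniform(1.0, 2.0, N)              # per-client curvatures
b = rng.uniform(-1.0, 1.0, N)             # per-client minimizers
x_star = np.sum(a * b) / np.sum(a)        # minimizer of (1/N) sum_c f_c

x = 0.0                                    # global parameter
xi = np.zeros(N)                           # client control variates xi_c
for _ in range(rounds):
    xi_bar = xi.mean()
    dy = np.zeros(N)
    for c in range(N):
        y = x
        for _ in range(H):                 # corrected local gradient steps
            y -= gamma * (a[c] * (y - b[c]) - xi[c] + xi_bar)
        xi[c] = xi[c] - xi_bar + (x - y) / (H * gamma)  # control update
        dy[c] = y - x
    x += dy.mean()                         # server averages client updates
```

With the correction term $-\xi_c + \bar{\xi}$ removed, the same loop reduces to FedAvg, which is the comparison the rebuttal's experiments are meant to illustrate.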
Summary: The paper proposes an analysis for the Scaffold algorithm, a popular method for dealing with data heterogeneity in federated learning. The authors first show that the global parameters and control variates define a Markov chain that converges to a stationary distribution in the Wasserstein distance. Leveraging this result, they prove that Scaffold achieves linear speed-up in the number of clients up to higher-order terms in the step size. The analysis also reveals that Scaffold retains a higher-order bias, similar to FedAvg, that does not decrease as the number of clients increases. Claims And Evidence: I think this is a great paper. I agree with the contributions and claims. Viewing Scaffold through the lens of Markov chains is a great idea. Methods And Evaluation Criteria: In my opinion the computation part is superfluous. It essentially confirms the theoretical results. It had better; otherwise there would be flaws in the math. They also make a claim that it's better than FedAvg. This has already been established numerically in many other papers. The value of this paper is in theory and thus Section 6 should be removed since it doesn't add anything interesting. Theoretical Claims: No, I did not. I checked the math in the main body, including all of the statements. Experimental Designs Or Analyses: This section is not needed (see my previous comments). Supplementary Material: I did not check the proofs. Relation To Broader Scientific Literature: See my comments above (MC and scaffold). Essential References Not Discussed: None Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: 1. What is the purpose of Section 6? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thorough and constructive feedback. We greatly appreciate your recognition of our paper’s contribution and the innovative perspective provided by viewing Scaffold using Markov chain formalism. Below, we carefully address each of your remarks: **Claims and Evidence:** We appreciate your positive acknowledgment of our contributions and claims, particularly regarding the theoretical framing of Scaffold using Markov chain analysis. **Methods and Evaluation Criteria:** Thank you for your insightful comment on the experimental section. Our computational experiments primarily validate our theoretical findings rather than establish novel empirical superiority over existing methods like FedAvg, whose empirical performance is already well-documented in the literature. However, we respectfully suggest retaining Section 6, at least in a condensed form. Indeed, numerical validations, even when theoretically expected, offer essential practical insights into the assumptions and conditions underpinning our theoretical results. We will explicitly clarify the purpose of these experiments in the revised manuscript to underscore their complementary role. Considering your feedback, we will condense the experimental section, and stress its role as an illustration of our theoretical results. This adjustment aligns with the theoretical focus of our paper while providing an experimental component for practical reference. --- Rebuttal Comment 1.1: Comment: Thanks for providing the answers. I have no further questions and comments.
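The role of such illustrative experiments can be made concrete: the SCAFFOLD recursion itself is only a few lines. The sketch below is our own minimal deterministic example (scalar quadratic clients, exact gradients, global step size $\eta_g = 1$, "option II" control-variate update of Karimireddy et al. (2020)), not the paper's Section 6 code; it shows the control variates cancelling client drift so that the global iterate reaches the true minimizer.

```python
import numpy as np

# Minimal sketch (ours, not the paper's code) of the SCAFFOLD recursion with
# global step size eta_g = 1, on scalar quadratics f_c(theta) = 0.5*(theta - b_c)^2.
# With exact gradients, the control variates remove client drift and the
# global iterate converges to the minimizer theta* = mean(b_c).
def scaffold(b, gamma=0.05, H=5, rounds=400):
    N = len(b)
    x = 0.0               # global parameter
    ci = np.zeros(N)      # client control variates
    c = 0.0               # server control variate (tracks mean of ci)
    for _ in range(rounds):
        dy, dc = np.zeros(N), np.zeros(N)
        for i in range(N):
            y = x
            for _ in range(H):  # H drift-corrected local steps
                y -= gamma * ((y - b[i]) - ci[i] + c)
            ci_new = ci[i] - c + (x - y) / (H * gamma)  # "option II" update
            dy[i], dc[i] = y - x, ci_new - ci[i]
            ci[i] = ci_new
        x += dy.mean()    # eta_g = 1: plain averaging of client updates
        c += dc.mean()
    return x

b = np.array([0.0, 1.0, 2.0, 5.0])
print(scaffold(b))  # ~2.0 = mean(b): no heterogeneity bias with exact gradients
```

With stochastic gradients one would instead observe a stationary distribution around $\theta^\star$, which is the regime the theoretical results above characterize.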
Summary: This paper investigates the convergence properties of Scaffold, a federated learning method designed to reduce variance among clients. By analyzing the global iterates and control variates from the perspective of a Markov chain, the study establishes a novel non-asymptotic convergence rate for Scaffold with respect to the total number of clients. Claims And Evidence: I think the claims in the submission are mostly clear. The paper illustrates well the key contributions stated in the introduction, with sufficient evidence. However, regarding Remark 2.1, I find the claim that "While this yields the desired linear speed-up by... and increasing the global one, it essentially reduces the algorithm to mini-batch SGD" somewhat confusing. To my understanding, this is not entirely accurate because federated algorithms involve locally iterative training. Reducing the local learning rate and performing multiple steps of local training does not equate to simply increasing the batch size in mini-batch SGD. The iterative nature of local updates in federated learning introduces additional dynamics and is not equivalent to standard mini-batch SGD. Methods And Evaluation Criteria: The main focus of this paper is on studying the convergence properties of existing methods. Consequently, it does not introduce any significant new methods or define evaluation criteria for comparative analysis. Theoretical Claims: I have reviewed all the theoretical results in the main paper and briefly examined the theoretical results in the appendix. I have several questions regarding this: 1. I believe the theoretical results in this paper rely on some strong assumptions, such as the bounded third and fourth derivatives. While I do not think these assumptions necessarily make the paper less convincing, it is true that achieving better convergence results heavily depends on them. 
If these assumptions are also commonly applied in other convergence analysis works related to Markov chains or other perspectives, I would suggest the authors highlight their necessity and reference existing related studies to make this clear. 2. The improved convergence rate with linear speedup in terms of the number of clients $N$ seems to rely on specific conditions, as highlighted in Corollary 4.9. These conditions include $\gamma \lesssim ...$, $H \lesssim ...$ and $N \lesssim ...$; this implies that the better rate seems achievable only when the number of local steps $H$ and the number of total clients $N$ are constrained within certain bounds. I think such conditions about $H$ and $N$ appear to be less prevalent in related works, such as Mangold et al. (2024b) and Hu & Huang (2023) referenced in your paper. 3. Moreover, one of the key motivations behind Scaffold is to address the issue of client drift, which occurs when a large number of local training steps causes clients with heterogeneous data distributions to converge toward their local optima rather than the global optimum. However, in this paper, the condition $H \lesssim ...$ (which, to my understanding, is not required in the original Scaffold paper) somewhat contradicts this motivation. Experimental Designs Or Analyses: 1. The authors discuss the non-vanishing bias of Scaffold in Section 5 and claim that Scaffold can eliminate heterogeneity bias. However, they do not provide further analysis or discussion on heterogeneity bias in the experimental section. It would be beneficial to include empirical results or insights that demonstrate how Scaffold addresses heterogeneity bias in practice. 2. The authors claim that Scaffold achieves linear speedup based on the results in Figure 1. However, this claim is not particularly convincing to me, as linear speedup with respect to the number of clients is not unique to Scaffold; FedAvg also exhibits this property. 
Therefore, the linear speedup of Scaffold does not stand out as a novel or exciting result. Supplementary Material: I briefly reviewed the theoretical results in the appendix. Relation To Broader Scientific Literature: This paper may contribute to privacy-preserving machine learning applications. Essential References Not Discussed: There is a (recent) paper that also explores FL using a Markov chain framework: Sun, Z., Zhang, Z., Xu, Z., Joshi, G., Sharma, P., & Wei, E. (2024). Debiasing Federated Learning with Correlated Client Participation. Other Strengths And Weaknesses: Strengths: The paper is well written, and the theoretical part is mostly clear to follow. Most weaknesses have been discussed in the previous sections, particularly regarding the theoretical claims, the claims of the paper, and the empirical analysis. Other Comments Or Suggestions: The legends in Figure 1 are missing. Questions For Authors: Please refer to previous sections. Addressing the issues in the previous parts would help me better understand and potentially change my evaluation of the paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
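The reviewer's observation that linear speed-up is not unique to Scaffold is fair, and the underlying mechanism is easy to illustrate. The toy script below (our own illustration, unrelated to the paper's Figure 1) runs constant-step SGD on a shared quadratic where the server averages $N$ noisy gradients per step, and checks that the stationary mean-squared error shrinks roughly by $1/N$; the technical difficulty discussed in this thread is showing the same $1/N$ scaling when noise also propagates through control variates.

```python
import numpy as np

# Toy check (ours, not the paper's experiment): averaging N clients' noisy
# gradients at every step divides the stationary error of constant-step SGD
# by roughly N. All clients share f(theta) = 0.5 * theta^2 (minimizer 0);
# each step the server averages N independent noisy gradients.
def stationary_mse(N, gamma=0.1, sigma=1.0, steps=40000, burn=20000, seed=0):
    rng = np.random.default_rng(seed)
    theta, acc = 0.0, []
    for t in range(steps):
        g = theta + sigma * rng.normal(size=N).mean()  # averaged noisy gradient
        theta -= gamma * g
        if t >= burn:
            acc.append(theta ** 2)  # squared error, past the burn-in phase
    return float(np.mean(acc))

m1, m10 = stationary_mse(1), stationary_mse(10)
print(m1 / m10)  # ~10: the leading error term scales as 1/N
```

For this recursion the stationary MSE is $\gamma\sigma^2/(N(2-\gamma))$, so the ratio concentrates near $N = 10$.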
Rebuttal 1: Rebuttal: Thank you for your thorough review of our paper and for your insightful comments. We are happy that you found our paper "well written, and the theoretical part is mostly clear to follow." **"I believe the theoretical results in this paper rely on some strong assumptions, such as the bounded third and fourth derivatives. [...] If these assumptions are also commonly applied in other convergence analysis works related to Markov chains or other perspectives, I would suggest the authors highlight their necessity and reference existing related studies to make it clear."** Thank you for allowing us to clarify. These assumptions are not required to establish the *convergence* results (namely Theorems 4.2 and 4.4), which hold under standard assumptions ($\mu$ strong convexity and $L$-smoothness). They are necessary to bound higher-order terms in subsequent results (e.g., Theorems 4.7 and 4.8): this is needed to derive our refined results. Higher-order derivative assumptions are standard in the literature; they are employed in the analysis of SGD in [1,2,3], using similar ideas of viewing SGD as a Markov chain. [1] Dieuleveut et al. (2020). Bridging the Gap between Constant Step Size Stochastic Gradient Descent and Markov Chains. [2] Allmeier et al. (2024). Computing the bias of constant-step stochastic approximation with Markovian noise. [3] Sheshukova et al. (2024). Nonasymptotic analysis of stochastic gradient descent with the Richardson-Romberg extrapolation. **"The improved convergence rate with linear speedup in terms of the number of clients seems to rely on specific conditions, as highlighted in Corollary 4.9 ($\gamma \lesssim ...$, $H \lesssim ...$, and $N \lesssim ...$). [...] I think such conditions about $H$ and $N$ appear to be less prevalent in related works, such as Mangold et al. (2024b) and Hu & Huang (2023)."** Corollary 4.9 provides the sample and communication complexity of SCAFFOLD. 
The bound on $H$ arises from the necessity to simultaneously control errors in both the parameters and the control variates, explaining the discrepancy between our analysis and that of Mangold et al. (2024b). The bound on $\gamma$ is standard in the stochastic optimization literature, typically required for bounding the error of the final iterate. Finally, the bound on $N$ emerges from the need to manage higher-order terms: convergence still holds for values of $N$ exceeding this bound, albeit without linear speed-up. If the reviewer believes it would clarify our main message, we can provide an alternative formulation of this lemma without imposing this condition. **"Moreover, one of the key motivation behind Scaffold is to address [...]. However, in this paper, the condition [...] contradicts this motivation."** A similar condition appears in the main theorem (Theorem VII) of Karimireddy et al. (2020), which requires $\eta_g \gamma H L \le 1$, with $\eta_g$ denoting the global step size. Since Karimireddy et al. mandate the global step size to satisfy $\eta_g \ge 1$, their result implicitly enforces the condition $\gamma H L \le 1$, matching the assumption in our paper. To our knowledge, no existing analysis of SCAFFOLD (with a deterministic communication scheme) avoids this assumption. If the reviewer is aware of any such analysis, we would gladly include it in our discussion. **"1. The authors discuss the non-vanishing bias of Scaffold in Section 5 and claim that Scaffold can eliminate heterogeneity bias...."** Our analysis of SCAFFOLD's bias facilitates a direct comparison with that of FedAvg, as characterized, for instance, by Mangold et al. (2024b). Specifically, we demonstrate that the bias can be decomposed into heterogeneity and stochasticity components. Regarding the empirical validation of this finding, we note that related experimental evidence has already been provided, for example, by Karimireddy et al. (2020). 
Consequently, our own experiments primarily focus on investigating the linear speed-up phenomenon. **"2. The authors claim that Scaffold achieves linear speedup based on the results in Figure 1. [...]."** We respectfully maintain that our result does represent a notable advancement: demonstrating that SCAFFOLD achieves linear speed-up has been an open problem in the literature for several years. Although linear speed-up is well-established for FedAvg, extending this result to SCAFFOLD is considerably more challenging due to noise propagation through the control variates into all parameters. To establish linear speed-up in this context, one must carefully track covariances among all pairs of control variates and demonstrate that these covariances scale as $1/N$, a task that is technically intricate and previously unresolved. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the rebuttal. Most of my concerns have been addressed. Sorry for my late reply, as I originally replied to the “Official Comment” button. I still find the condition $N \lesssim ...$ a little unclear (I understand the necessity of $\gamma \lesssim ...$ and $H \lesssim ...$, as many papers need such conditions). Could the authors please elaborate further on how “the bound on $N$ emerges from the need to manage higher-order terms”? I would be glad to raise my score if this concern is clarified. --- Reply to Comment 1.1.1: Comment: Thank you very much for the follow-up. Regarding the condition on $N$, it can be removed, which results in the following modification of Corollary 4.9. In this version of the Corollary, we do not impose any condition on $N$. 
Instead, the number of gradients computed by each client scales in $\tfrac{\sigma_\star^2 }{\mu^2 \epsilon^2} \max(\tfrac{1}{N}, \tfrac{Q^{2/3} \epsilon^{2/3}}{\mu}, \tfrac{Q \epsilon}{L^{1/2}\epsilon^{1/2}})$ (up to logarithmic terms), which decreases in $N$ while $\max(\tfrac{Q^{2/3} \epsilon^{2/3}}{\mu}, \tfrac{Q \epsilon}{L^{1/2}\epsilon^{1/2}}) \le 1/N$. We will replace the current Corollary 4.9 by this updated version. ------ What we meant by "the bound on N emerges from the need to manage higher-order terms" is that linear speed-up holds until this given threshold on $N$ (but this does not prevent convergence of the algorithm). This is due to the need to bound the terms in $\gamma$, $\gamma^{3/2}$ and $\gamma^3$ in Theorem 4.8's result by $\epsilon^2$, which requires that $\gamma \lesssim{} \min(\frac{N \mu \epsilon^2}{\sigma_\star^2}, \frac{\epsilon^{4/3} \mu^{5/3}}{Q^{2/3} \sigma_\star^2}, \frac{{L}^{1/2} \mu^{3/2} \epsilon}{ Q \sigma_\star^2})$. Thank you again for your insightful remarks, which will greatly help to improve the clarity of our paper. ----- **Corollary 4.9.** Let $\epsilon > 0$. Under Theorem 4.8's assumptions, we can set $\gamma \lesssim{} \min(\frac{N \mu \epsilon^2}{\sigma_\star^2}, \frac{\epsilon^{4/3} \mu^{5/3}}{Q^{2/3} \sigma_\star^2}, \frac{{L}^{1/2} \mu^{3/2} \epsilon}{ Q \sigma_\star^2})$ and $H \lesssim \frac{\sigma_\star^2 \min( 1, {\mu}/{\zeta})}{\mu L \epsilon^2} \max(\frac{1}{N}, \frac{Q^{2/3} \epsilon^{2/3}}{\mu}, \frac{Q \epsilon}{L^{1/2}\epsilon^{1/2}})$. 
Then, Scaffold guarantees $\mathbb{E}[ \| \theta^{T} - \theta^\star \|^2] \le \epsilon^2$ for $T \gtrsim \frac{L}{\mu} \max(1, \zeta/\mu) \log\left( \frac{\| \theta^{0} - \theta^\star \|^2 + \zeta^2 / L^2}{\epsilon^2} \right)$, and the number of stochastic gradients computed by each client is $$\# \text{grad per client} \lesssim \tfrac{\sigma_\star^2 }{\mu^2 \epsilon^2} \max(\tfrac{1}{N}, \tfrac{Q^{2/3} \epsilon^{2/3}}{\mu}, \tfrac{Q \epsilon}{L^{1/2}\epsilon^{1/2}}) \log\left( \tfrac{\| \theta^{0} - \theta^\star \|^2 + \zeta^2 / L^2}{\epsilon^2} \right) \enspace.$$ **Proof.** The condition on $\gamma$ stems from the conditions $\frac{\gamma}{N \mu} \sigma_\star^2 \le \epsilon^2$ and $\frac{\gamma^{3/2} Q}{\mu^{5/2}} \sigma_\star^3 \le \epsilon^2$, as well as $\frac{\gamma^3 H Q^2}{\mu^3} \sigma_\star^4 \le \frac{\gamma^2 Q^2}{\mu^3 L} \sigma_\star^4 \le \epsilon^2$. Then the condition on $H$ comes from $\gamma H L \lesssim 1$ and $\gamma H L \zeta_2 \lesssim \mu$. Finally, setting $T$ such that $(1 - \gamma \mu/4)^{HT} (\|\theta^0 - \theta^\star \|^2 + \gamma^2 H^2 \zeta_1^2 ) \le \epsilon^2$ gives the communication complexity. The sample complexity comes from the fact that each client computes $TH$ gradients.
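For readability, the inversion from the proof's three conditions to the stated step-size bound can be spelled out (a routine check we add here):

$$\frac{\gamma}{N\mu}\,\sigma_\star^2 \le \epsilon^2 \iff \gamma \le \frac{N \mu \epsilon^2}{\sigma_\star^2}, \qquad \frac{\gamma^{3/2} Q}{\mu^{5/2}}\,\sigma_\star^3 \le \epsilon^2 \iff \gamma \le \frac{\epsilon^{4/3} \mu^{5/3}}{Q^{2/3} \sigma_\star^2}, \qquad \frac{\gamma^{2} Q^2}{\mu^{3} L}\,\sigma_\star^4 \le \epsilon^2 \iff \gamma \le \frac{L^{1/2} \mu^{3/2} \epsilon}{Q \sigma_\star^2},$$

and taking the minimum of the three right-hand sides gives exactly the condition $\gamma \lesssim{} \min(\frac{N \mu \epsilon^2}{\sigma_\star^2}, \frac{\epsilon^{4/3} \mu^{5/3}}{Q^{2/3} \sigma_\star^2}, \frac{{L}^{1/2} \mu^{3/2} \epsilon}{ Q \sigma_\star^2})$ stated in the corollary.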
Summary: This paper studies the convergence of the SCAFFOLD algorithm under the assumptions of (a) strong convexity, (b) smoothness, (c) first-order similarity (i.e. the average norm of the difference between the gradients on each client and the avg function is bounded), (d) second-order similarity (like the former, but for Hessians), and (e) second-order smoothness (so the Hessian is Lipschitz). The original analysis of SCAFFOLD does exhibit a linear speedup, but only when using a global stepsize (i.e. not straightforward aggregation). This paper gives convergence guarantees showing that SCAFFOLD does exhibit a linear speedup in its main terms, albeit at the cost of the additional assumptions (d) and (e). The paper also derives approximate expressions for the covariates of SCAFFOLD at convergence. After reviewing the authors' response clarifying their focus on studying SCAFFOLD with global stepsize η=1 (which aligns with standard usage in implementations), I maintain my accept recommendation. Claims And Evidence: The paper presents novel theory that provably shows SCAFFOLD does enjoy linear speedup in the number of nodes, albeit at the cost of added assumptions (in particular, the Lipschitz Hessian assumption and the second-order similarity ones). There are additional higher-order terms that do not enjoy a linear speedup, but this is common to all other analysis of both SCAFFOLD and FedAvg. 1. "In this paper, we aim to study the SCAFFOLD algorithm as it is commonly used. Thus, contrarily to (Karimireddy et al., 2020; Yang et al., 2021), we do not consider two-sided step sizes. While this yields the desired linear speed-up by dividing the local step size by √N , and increasing the global one, it essentially reduces the algorithm to mini-batch SGD, and does not give much insights on SCAFFOLD itself. 
Thus, we consider in Table 1 the rate of SCAFFOLD without global step size" One of the key flexibilities of using two stepsizes is to be able to interpolate between minibatch SGD and local SGD, as argued in [1]-- I don't see why removing this is a good idea. I also find it a bit strange to say "does not give much insights on SCAFFOLD itself" when the SCAFFOLD paper uses a global stepsize as an integral part of the algorithm! "in practical implementations, there is no global step size" This does not seem to be true. Looking at GitHub repos implementing SCAFFOLD, many use a global learning rate [e.g. 2, 3]; moreover, [4] argued that server stepsizes play a crucial role in federated learning algorithms. Nevertheless, I think understanding the algorithm with global stepsize $1$ is still useful. 2. The fact that SCAFFOLD still suffers from bias due to stochastic updates is not very surprising. After all, there is no correction term applied on the basis of which stochastic gradient we are using-- the correction terms are indexed by the clients. [1] Woodworth, Patel & Srebro (2020) Minibatch vs Local SGD for Heterogeneous Distributed Learning. [2] https://github.com/KarhouTam/SCAFFOLD-PyTorch/blob/master/src/server/scaffold.py [3] https://github.com/BalajiAI/Federated-Learning/blob/main/src/algorithms/SCAFFOLD/server.py [4] Charles & Konečný (2020) On the Outsized Importance of Learning Rates in Local Update Methods. Methods And Evaluation Criteria: This is a theoretical paper, and the included examples are toy ones that demonstrate the theory. However, I think the paper would benefit significantly from including some experiments on neural networks to see if we can expect the same linear speedup to hold in the nonconvex setting. Theoretical Claims: I have checked some proofs in section B, particularly the proofs of Lemma B.1. to Theorem B.4. I did not check the other proofs. Experimental Designs Or Analyses: N/A. 
Supplementary Material: I read sections A and B in the supplementary. Relation To Broader Scientific Literature: There is plenty of literature on variance-reduced methods like SCAFFOLD for reducing the heterogeneity bias in federated learning; This paper advances the analysis of SCAFFOLD and the methods here can potentially be used for other algorithms as well. The analysis in the present paper borrows a lot from [5], but I think it's sufficiently different that it is still a significant contribution. [5] Mangold, Paul, et al. "Refined Analysis of Federated Averaging's Bias and Federated Richardson-Romberg Extrapolation." arXiv preprint arXiv:2412.01389 (2024). Essential References Not Discussed: N/A. Other Strengths And Weaknesses: 1. (Strength) The Markov chain point-of-view is not very common in the federated learning literature, and using it here turns out to be very useful and insightful. I think this connection can be helpful to the community going forward. 2. (Strength) The two-tiered approach of first relying on a convergence guarantee with no linear speedup, then one with linear speedup, is also quite nice. 3. (Strength) The approach of studying the convergence of the algorithm by essentially studying its algorithmic stability (Lemma B.1) is also very creatively applied here. It's rare to see this in the literature. 4. (Weakness) Virtually all of the results require using local stepsizes that scale with $1/H$, this is a tiny stepsize and does not reflect practice. It also means we can not recover results in the i.i.d. regime, which require large local stepsizes. Other Comments Or Suggestions: Line 1094: "Bouding" should be Bounding. Questions For Authors: 1. 
Given that you do assume second-order similarity, the rate you obtain for SCAFFOLD is $\frac{L}{\mu} \frac{\zeta_2}{\mu}$ when $\zeta_2 > \mu$-- given that $\zeta_2 \leq L$ necessarily, this rate is always larger than $\frac{\zeta_2^2}{\mu^2}$, which is the rate obtained by several algorithms in the second-order similarity setting (e.g. [5]). This adds yet another dimension to the comparison: sometimes, $\zeta_2^2/\mu^2$ can be even smaller than $\sqrt{L/\mu}$. I am not 100% sure if any of the papers on the second-order similarity assumption show a linear speedup; Do you think it's possible to get a rate like $\frac{\zeta_2^2}{\mu^2}$ here? 2. Can the Performance Estimation Problem [6] be used to derive an algorithm-specific lower bound for SCAFFOLD that can shed light on what the tightest rate we can expect to be? [5] Khaled & Jin (2022) Faster Federated Optimization Under Second-Order Similarity. [6] https://francisbach.com/computer-aided-analyses/ Code Of Conduct: Affirmed. Overall Recommendation: 4
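The rate comparison raised in Question 1 is easy to instantiate numerically (toy numbers of our own, not from either paper): with a large condition number and small second-order similarity, $\zeta_2^2/\mu^2$ falls well below $\sqrt{L/\mu}$.

```python
import math

# Toy numbers (ours) for the reviewer's rate comparison: with a large
# condition number L/mu and small second-order similarity zeta_2, the
# rate zeta_2^2 / mu^2 can be much smaller than sqrt(L / mu).
mu, L, zeta2 = 1.0, 1e4, 5.0
similarity_rate = (zeta2 / mu) ** 2   # 25.0
accelerated_rate = math.sqrt(L / mu)  # 100.0
print(similarity_rate < accelerated_rate)  # True
```

This is only an existence check for the regime the reviewer describes; whether a $\zeta_2^2/\mu^2$-type rate with linear speed-up is achievable is exactly the open question posed.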
Rebuttal 1: Rebuttal: Thank you for the positive evaluation of our paper! We appreciate that you found the Markov chain point of view "very useful and insightful", and "helpful to the community going forward", and that you found our theory based on algorithmic stability "very creatively applied here" and "rare to see in the literature." We address your concerns below. **" “in practical implementations, there is no global step size” This does not seem to be true. Looking at GitHub repos implementing SCAFFOLD, many use a global learning rate [e.g. 2, 3]"** Thank you for allowing us to clarify this point. Our statement refers to the experimental section in the original SCAFFOLD paper, which states: "We always use global step-size $\eta_g = 1$". While it is true that implementations of this algorithm allow for a global step size different from one, the default setting in all experiments we found is $\eta_g = 1$. From the references you cite, [2] consistently uses a global step size $\eta_g = 1$ in its experiments, and [3] only provides an implementation of the algorithm without using it in specific tasks. As an additional example, the FLamby dataset's benchmarks [FLamby] also sets the global step size to one across various optimization tasks. We will add these references to the statement in the paper to make this as clear as possible. [2] https://github.com/KarhouTam/SCAFFOLD-PyTorch/blob/master/src/server/scaffold.py [3] https://github.com/BalajiAI/Federated-Learning/blob/main/src/algorithms/SCAFFOLD/server.py [FLamby] https://github.com/owkin/FLamby/tree/main/flamby **"One of the key flexibilities of using two stepsizes is to be able to interpolate between minibatch SGD and local SGD, as argued in [1]-- I don’t see why removing this is any good idea."** **"Moreover, [4] argued that server stepsizes play a crucial role in federated learning algorithms. 
Nevertheless, I think understanding the algorithm with global stepsize $1$ is still useful."** We acknowledge this point and agree that server step sizes can play an important role. However, as stated above, using a server step size departs from standard usage of Scaffold, motivating our choice of studying the algorithm with the global step size set to one. Precisely studying the impact of the choice of global/local step sizes on Scaffold (in terms of convergence rate, bias, and linear speed-up) could be an interesting extension of the analysis framework we propose here, one that goes beyond the scope of the current paper. **"Virtually all of the results require using local stepsizes that scale with $1/H$, this is a tiny stepsize and does not reflect practice. It also means we can not recover results in the i.i.d. regime, which require large local stepsizes."** Thank you for raising this point. Note that the condition was already present in the original Scaffold paper. Removing the condition $\gamma H L \lesssim{} 1$ from the convergence of Scaffold to a stationary distribution is a highly non-trivial question. In fact, it is not clear that Scaffold converges to a stationary distribution for larger $H$: this is a very interesting open problem to solve in future work. **"1. I am not 100% sure if any of the papers on the second-order similarity assumption show a linear speedup; Do you think it’s possible to get a rate like $\zeta_2^2 / \mu^2$ here?"** To our knowledge, our result is the first to show that a method that corrects heterogeneity (at least without global step size) has linear speed-up. Reducing $L \zeta_2 / \mu^2$ to $\zeta_2^2 / \mu^2$ would require removing the condition $\gamma H L \lesssim{} 1$, which, as stated above, is a difficult problem. **"2. 
Can the Performance Estimation Problem [6] be used to derive an algorithm-specific lower bound for SCAFFOLD that can shed light on what the tightest rate we can expect to be?"** Performance estimation ideas are completely orthogonal to our work. Performing this kind of analysis would require a thorough investigation. In particular, PEP relies on "interpolation conditions", which, to extend to the federated case, would require choosing a heterogeneity measure and adapting the interpolation conditions accordingly. Nonetheless, they constitute an interesting perspective for future work, with the goal of achieving the best possible rate for Scaffold. --- Rebuttal Comment 1.1: Comment: Thanks for your response; I maintain my score.
Does One-shot Give the Best Shot? Mitigating Model Inconsistency in One-shot Federated Learning
Accept (poster)
Summary: The paper investigates one-shot Federated Learning (OFL), which aims to reduce the communication costs associated with traditional multi-round Federated Learning. The authors highlight that existing OFL methods face significant challenges due to "garbage in, garbage out" issues, where inconsistent local models lead to a degraded global model. The proposed solution, denoted as FAFI, introduces Self-Alignment Local Training (SALT) and Informative Feature Fused Inference (IFFI) to create consistent features in one-shot local models and enhance global model inference. Through extensive experiments, FAFI outperforms existing OFL methods by a significant margin. Claims And Evidence: Claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria (e.g., benchmark datasets) make sense for the problem at hand. Theoretical Claims: I checked the correctness of proofs for theoretical claims. Experimental Designs Or Analyses: The experimental design is comprehensive. However, the feature fusion part might introduce extra communication or computation; the authors should provide a more detailed analysis of this. Supplementary Material: Yes. Relation To Broader Scientific Literature: This work is related to one-shot Federated Learning. The experimental results reveal general weaknesses of previous OFL methods. Figure 4 also provides a holistic comparison of the accuracy and communication costs of different OFL and multi-round FL methods. Essential References Not Discussed: Regarding feature alignment, several works also enhance feature quality, including [1,2,3], which likewise propose aggregating same-class features and repelling features of different classes. For prototype-based FL, works [1,4,5] are also related. The authors should consider discussing them and clarifying how the proposed feature alignment and prototype learning differ from these works. 
Moreover, in Section 4.3, what is the difference between the proposed feature fusion and FuseFL? [1] Virtual Homogeneity Learning: Defending against Data Heterogeneity in Federated Learning. In ICML 2022. [2] Model-Contrastive Federated Learning. In CVPR 2021. [3] FedImpro: Measuring and Improving Client Update in Federated Learning. In ICLR 2024. [4] FedProto: Federated prototype learning across heterogeneous clients. In AAAI 2022. [5] No fear of heterogeneity: Classifier calibration for federated learning with non-iid data. In NeurIPS 2021. Other Strengths And Weaknesses: Strengths: 1. The paper thoroughly identifies and analyzes the inconsistencies within and between one-shot local models, providing a theoretical foundation for the proposed method. 2. The introduction of Self-Alignment Local Training (SALT) effectively addresses intra-model inconsistencies by fostering invariant feature representations. 3. The use of Informative Feature Fused Inference (IFFI) on the server side improves the aggregation process, leading to better global model performance. 4. Extensive empirical validation across multiple datasets demonstrates the effectiveness of the FAFI framework in a variety of scenarios, establishing its robustness. 5. The proposed solution achieves a notable accuracy improvement over baseline models, showcasing its potential impact on the field. Weaknesses: 1. The paper does not address the potential limitations and complexities of implementing the FAFI framework in real-world federated learning environments. 3. There may be a lack of comprehensive comparisons with other state-of-the-art methods that also aim to reduce communication costs, apart from the eleven baselines included. 4. Although the paper discusses privacy concerns, the utilized label information may still expose private data to some degree. 5. 
See the Essential References section above: works [1,2,3] on feature alignment and [1,4,5] on prototype-based FL should be discussed, along with the difference between the proposed feature fusion and FuseFL (Section 4.3). 6. The paper could benefit from a more in-depth analysis of the computational efficiency and resource requirements of the FAFI framework compared to existing methods. 7. Some formatting errors: in Line 216, the formula overflows the text box. 8. Some writing issues: in Line 164, $\Delta_{\text{intra}}$ is not defined in the main text. The augmentation function $A$ is unclear; why do we need an augmentation function $A$ here? I guess the $A$ in the appendix denotes the global dataset. Other Comments Or Suggestions: See weaknesses. Questions For Authors: See weaknesses. 1. Besides, to highlight the novelty, could you illustrate the detailed differences between the proposed methods and previous works? 2. In Line 69, "we observe that better performance is always accompanied by larger parameter discrepancies." However, the later analysis shows that a large discrepancy is not good, so you want to minimize it (Line 258). Could you interpret this with more detail? I'm willing to increase my score if the above weaknesses and questions are addressed. Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their time in reviewing our work. We appreciate that you consider our presentation clear and our method effective and robust. Please see our detailed feedback on your concerns below. **W1: Potential limitations and complexities of implementation.** **Ans for W1:** Thank you for your valuable comments. Indeed, FAFI may have limitations in handling domain-shift/multi-task scenarios. FAFI assumes that invariant feature representations should remain consistent or similar, enabling them to mitigate model inconsistencies. However, in domain-shift/multi-task scenarios, feature representations often exhibit significant discrepancies across different domains or tasks, particularly in the isolated training paradigm. A potential solution is to cluster and ensemble feature representations across various domains or tasks, allowing unbiased representations to encapsulate multi-domain/task knowledge. Regarding implementation in real-world FL environments, as detailed in Algorithm 1 (Appendix C), FAFI requires only two modifications compared to existing OFL methods, making it easily adaptable for both clients and the server: 1) On the client side, we train the feature extractor in a contrastive manner and use the prototypes instead of the classifier. 2) On the server side, fusion is performed at the feature level rather than the parameter or prediction level. We will highlight these in our revised version. **W2: Comparisons with other methods** **Ans for W2:** Thanks for your important comments. We have conducted new experiments comparing three methods, i.e., FedCompress, FedKD, and FedACG, which reduce the communication cost by gradient compression, gradient factorization, and transferring gradient momentum, respectively. These new results on CIFAR-10 are provided below and added to our revision. Results show that FAFI achieves better efficiency compared with all baselines.

|Methods|Acc.|Comm. Cost|
|-|-|-|
|FedCompress (C=1)|15.45|9.86 MB|
|FedKD (C=1)|20.67|34.89 MB|
|FedACG (C=1)|19.77|44.7 MB|
|FedCompress (C=50)|33.34|0.48 GB|
|FedKD (C=50)|60.12|1.70 GB|
|FedACG (C=50)|68.23|2.18 GB|
|Ours|77.83|44.7 MB|

**W3: Privacy issue** **Ans for W3:** Thanks for your very valuable comments. We agree with the statement *there is no free lunch for the privacy-utility trade-off* [1], as the tenet of FL is to seek a balance among efficiency, effectiveness, and privacy. FAFI, as well as most OFL methods, aims to ensure efficiency for affordable deployment while pushing the effectiveness boundary as far as possible and keeping privacy leakage below an acceptable level. As to FAFI, the possible leakage is the prototypes, which previous works have shown not to compromise privacy. For stricter privacy requirements, one potential solution is implementing differential privacy or using noised samples, at the cost of some effectiveness degradation. **W4 & Q1: Comparisons with other methods** **Ans for W4 & Q1:** Thanks for your valuable comments. We present the differences between our proposed FAFI and the feature-enhanced and prototype-based methods as follows. For feature-enhanced works, we note that FAFI is one-shot and selectively enhances the features at the inference stage for high performance.

|Methods|Paradigm|Stage|Feature Enhancement|
|-|-|-|-|
|VHL|Mul.|Train|Generative|
|MOON|Mul.|Train|Contrastive|
|FedImpro|Mul.|Train|Estimative|
|FuseFL|One-shot|Train|Adaptor-based|
|**Ours**|**One-shot**|**Inference**|**Selective**|

For prototype-based methods, FAFI aligns global prototypes by aggregating learnable local prototypes in one round.

|Methods|Paradigm|Prototypes|
|-|-|-|
|VHL|Mul.|Generative|
|FedProto|Mul.|Statistical|
|CCVR|Mul.|GMM-based|
|**Ours**|**One-shot**|**Learnable**|

Mul. is short for multi-round. We will add these discussions in our revision. **W5: More in-depth efficiency analysis** **Ans for W5:** Thanks for your valuable comments. 
We have conducted the memory and computation cost analysis of FAFI. Due to limited space, please see the response to **Reviewer Chua W1 & Q1**. **W6 & W7 & Q2: Writing issues** **Ans:** Thanks for your careful reading. 1) $\Delta_{intra} = \vert L(x, y) - L(x', y) \vert$ is the performance discrepancy between any two samples $x, x'$ with the same label $y$. For any two samples $x, x'$, we can find a function $A$ that satisfies $x' = A(x)$; $A$ can abstract the feature variation among samples. 2) The 'performance' in L69 represents the performance of the **local model**; we observe that better local performance is always accompanied by large parameter discrepancies (Fig. 2-b). The performance in L258 denotes the performance of the **global model**. Sorry for these misunderstandings. We will revise these writing issues and format errors and provide more explanations for clarity. [1] No free lunch theorem for security and utility in federated learning. ACM TIST, 2022. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. The new clarification is clear, and the differences from previous methods are clear. I'd like to raise my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 8kHb: We're grateful for your quick feedback during this busy period. We deeply appreciate your consideration in raising the score. Your constructive comments have significantly contributed to the refinement of our work. We will add all these above discussions in our final revision. Thanks a lot for your valuable comments! We will remain open and ready to delve into any more questions or suggestions you might have until the last moment. Best regards and thanks
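The intra-model inconsistency measure clarified in the rebuttal above, $\Delta_{intra} = \vert L(x, y) - L(x', y) \vert$, can be illustrated with a minimal numeric sketch. The logits, the label, and the cross-entropy loss below are hypothetical placeholders for illustration, not values or code from the paper:

```python
import numpy as np

# Hypothetical sketch: Delta_intra = |L(x, y) - L(x', y)| is the loss
# discrepancy between a sample x and an augmented view x' = A(x) that
# share the same label y. Logits here are made-up illustrative numbers.
def cross_entropy(logits, label):
    z = logits - logits.max()          # stabilized softmax
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])

logits_x  = np.array([2.0, 0.5, 0.1])  # model output for sample x
logits_xp = np.array([1.2, 0.9, 0.3])  # model output for the view x' = A(x)
y = 0                                  # shared label

delta_intra = abs(cross_entropy(logits_x, y) - cross_entropy(logits_xp, y))
```

A consistent local model would drive `delta_intra` toward zero for all augmented pairs, which is what the self-alignment training objective targets.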
Summary: Existing OFL methods focus on server-side aggregation, which falls into the ‘garbage in, garbage out’ pitfall. The authors trace the root cause of such garbage inputs to intra-model and inter-model inconsistencies in the face of data heterogeneity. To address these, they design self-alignment local training and informative feature fused inference. Experimental results verify the effectiveness, scalability, and efficiency of the proposed method. Claims And Evidence: Clear and convincing claims. Methods And Evaluation Criteria: The method is suitable for the proposed model inconsistency problem. The evaluation criterion, accuracy on three classification datasets, is suitable. Theoretical Claims: Correct theoretical claims. The manuscript contains two theorems, Theorem 3.1 for intra-model inconsistency and Theorem 3.2 for inter-model inconsistency. Experimental Designs Or Analyses: I have assessed the soundness and validity of the experimental designs, and the authors have effectively validated the proposed method’s effectiveness. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: They provide a novel solution for mitigating model inconsistency. Existing OFL methods all focus on server-side aggregation based on inferior local models. This paper proposes to provide a high-quality local model through self-alignment local training and to utilize the extracted features for aggregation. Essential References Not Discussed: Almost all essential references have been included and discussed. Other Strengths And Weaknesses: 1. Strengths: 1) This manuscript proposes FAFI, which achieves good performance while requiring only one communication round. 2) The paper is the first to systematically identify the "garbage in, garbage out" pitfall in OFL caused by intra- and inter-model inconsistencies. The visualization and theoretical analysis of the model inconsistencies are interesting. 
3) The authors utilize self-supervised learning methods and prototype learning for better local models and propose a feature fusion-based inference method. The feature fusion is novel in OFL. 4) FAFI is data-free: it does not require the source data or any other auxiliary information. 2. Weaknesses: 1) SALT consists of two parts, feature alignment and category-wise prototype learning, and it is unclear how these two parts contribute to mitigating the intra-model inconsistency. Please provide an ablation study on L_ssl and L_proto. 2) Noise impact in IFFI. How does the noise representation used for attention weighting in feature fusion (§4.3) impact the final performance? Please provide more evaluations to demonstrate the impact of the noise. Other Comments Or Suggestions: 1) Inconsistent descriptions of the categories of existing OFL methods (Sec. 2 and Sec. 5). Questions For Authors: 1. It is unclear how L_ssl and L_proto contribute to mitigating the intra-model inconsistency. 2. How do the weights in feature fusion impact the performance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for taking the time to review our work. We greatly appreciate that you find our method efficient and data-free. Please find our detailed responses to your concerns below. **W1 & Q1: Ablation study on $L_{ssl}$ and $L_{proto}$.** **Ans for W1 & Q1:** Thanks for your important comments. As suggested, we have conducted new ablation studies on $L_{ssl}$ and $L_{proto}$, and the results are shown below.

| Feature Extractor | Prototypes / Classifier | CIFAR-10 | CIFAR-100 | Tiny-ImageNet |
| --- | --- | --- | --- | --- |
| $L_{ce}$ | Classifier + $L_{ce}$ | 17.34 | 6.45 | 8.31 |
| $L_{ce}$ | Prototypes + $L_{ce}$ | 18.23 | 7.12 | 8.92 |
| $L_{ce}$ | Prototypes + $L_{proto}$ | 52.34 | 33.34 | 23.12 |
| $L_{ssl}$ | Classifier + $L_{ce}$ | 22.34 | 12.45 | 10.44 |
| $L_{ssl}$ | Prototypes + $L_{ce}$ | 50.12 | 31.89 | 22.34 |
| $L_{ssl}$ | Prototypes + $L_{proto}$ | 77.83 | 45.48 | 43.62 |

We note that the performance of FAFI is significantly improved by using $L_{ssl}$ and $L_{proto}$ for enhanced feature representation and discriminative prototypes, and the combination of $L_{ssl}$ and $L_{proto}$ achieves the best results. Additional details will be added in Appendix F in our revision. **W2 & Q2: Impact of the feature fusion strategy.** **Ans for W2 & Q2:** Thanks for your important comments. As suggested, we have conducted new evaluations on the impact of the feature fusion strategy with $Dir(0.1)$, and the results are shown below. 
| Feature Fusion Strategy | CIFAR-10 | CIFAR-100 | Tiny-ImageNet |
| --- | --- | --- | --- |
| Average | 72.83 | 37.48 | 30.12 |
| $\mathcal{N}(0,5)$ | 71.83 | 35.40 | 28.09 |
| $\mathcal{N}(0,10)$ | 70.12 | 33.02 | 26.34 |
| $\mathcal{N}(0,50)$ | 68.23 | 31.12 | 24.12 |
| $\mathcal{N}(1,1)$ | 69.12 | 32.77 | 25.12 |
| $\mathcal{N}(0,1)$ (Ours) | 77.83 | 45.48 | 43.62 |

We note that $\mathcal{N}(0,1)$ facilitates the best performance in the tested cases, as it effectively captures feature information compared to other strategies. Additional details and discussions about this hyperparameter will be included in Appendix F in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for the author's response. After reading the author's rebuttal, the main concerns I had have been addressed, and I will maintain my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer BvHZ: We're grateful for your quick feedback during this busy period. We deeply appreciate your consideration in raising the score. Your constructive comments have significantly contributed to the refinement of our work. Thanks a lot for your valuable comments! We will remain open and ready to delve into any more questions or suggestions you might have until the last moment. Best regards and thanks
Summary: This manuscript tries to solve the 'garbage in, garbage out' problem caused by inconsistent models in the one-shot federated learning paradigm. The authors propose FAFI, a novel OFL framework consisting of two key components: SALT for invariant feature learning and IFFI for server-side feature fusion-based inference. Extensive experiments on CIFAR-10/100 and Tiny-ImageNet demonstrate FAFI’s superiority over 11 baselines, achieving a 10.86% average accuracy improvement. Claims And Evidence: The claims made in this manuscript are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method and evaluation criteria are suitable for the problem. Theoretical Claims: The theoretical claims in this paper are correct. However, there are some typos in the appendix: in line 653, $Vert(x-A(x))||^2>0$ should presumably be $|| x - A(x) ||^2>0$. Experimental Designs Or Analyses: The experimental part is well-organized, and the demonstrations in the main paper and appendix are good. It would be better to present more results on efficiency. Supplementary Material: No supplementary material. Relation To Broader Scientific Literature: The paper situates its contributions within the broader federated learning (FL) literature by empirically and theoretically analyzing the limitations of existing one-shot FL (OFL) methods. The paper explicitly differentiates FAFI from related approaches in Sec. 4.4, including prototype-based methods and model merging techniques. Essential References Not Discussed: The paper has cited the related works that are essential to understanding the key contribution. Other Strengths And Weaknesses: Strengths: 1. The paper is well-demonstrated and well-organized. The figures, such as Figure 1(a), Figure 2(a), and Figure 3, are good. 2. The paper is well-motivated. The analysis of intra- and inter-model inconsistencies is supported by both empirical evidence (Grad-CAM visualizations) and theoretical proofs (Theorems 3.1 and 3.2). 3. The experimental part is well-organized. 
Evaluations across diverse non-IID settings and client scales against 11 baselines validate FAFI's effectiveness, scalability, and efficiency. 4. The method significantly reduces communication overhead while outperforming multi-round FL baselines in efficiency-accuracy trade-offs (Figure 4). This aligns well with real-world FL deployment needs. Weaknesses: 1. Details on the noise in IFFI (Eq. 8) are unclear. It would be better to provide code and hyperparameter settings. 2. The efficiency in extreme non-IID settings, such as $\alpha=0.05$, is not evaluated; Figure 4 only provides evaluations at $\alpha=0.5$. Please provide more settings to verify its efficiency. Other Comments Or Suggestions: 1. There are some typos in the appendix, e.g., in line 653, $Vert(x-A(x))||^2>0$ should be $|| x - A(x) ||^2>0$. 2. It is recommended that open-source code be provided to replicate the experimental results. Questions For Authors: 1. Please provide the details about the noise in IFFI. 2. Please provide more settings to verify the efficiency of FAFI. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for taking the time to review our work. We greatly appreciate that you find the paper well-motivated and well-demonstrated. Please find our detailed responses to your concerns below. **W1 & Q1: Details in IFFI.** **Ans for W1 & Q1**: Thanks for your important comments. We apologize for the missing explanation of IFFI. We use the Gaussian distribution $\mathcal{N}(0, 1)$ in Eq. 8. For clarity, we will add an ablation study on the hyperparameters. More details can be seen in the response to **Reviewer BvHZ W2 & Q2** and will be added in Appendix E in the revision. **W2 & Q2: More settings to verify the effectiveness of FAFI.** **Ans for W2 & Q2:** Thanks for your important comments. We have conducted new evaluations on CIFAR-10 with data heterogeneity $Dir(0.05)$ and $Dir(0.1)$, and the results are shown below.

| Methods | Acc. $Dir(0.05)$ | Acc. $Dir(0.1)$ | Comm. Cost |
| --- | --- | --- | --- |
| MA-Echo | 36.77 | 51.23 | 44.7 MB |
| O-FedAvg | 12.13 | 17.43 | 44.7 MB |
| FedFisher | 40.03 | 47.01 | 48.2 MB |
| FedDF | 35.53 | 41.58 | 44.7 MB |
| F-ADI | 35.93 | 48.35 | 44.7 MB |
| F-DAFL | 38.32 | 46.34 | 44.7 MB |
| DENSE | 38.37 | 50.26 | 44.7 MB |
| Ensemble | 41.36 | 45.43 | 44.7 MB |
| Co-Boosting | 39.20 | 58.49 | 44.8 MB |
| FuseFL | 54.42 | 73.79 | 53.32 MB |
| IntactOFL | 48.22 | 61.13 | 44.7 MB |
| FedAvg $C=50$ | 23.45 | 27.44 | 2.12 GB |
| FedProx $C=1$ | 13.53 | 17.58 | 44.7 MB |
| FedProx $C=50$ | 23.32 | 26.76 | 2.12 GB |
| SCAFFOLD $C=1$ | 12.45 | 16.23 | 89.4 MB |
| SCAFFOLD $C=50$ | 27.22 | 30.45 | 4.36 GB |
| FedCav $C=1$ | 12.49 | 16.77 | 44.8 MB |
| FedCav $C=50$ | 26.45 | 30.23 | 2.12 GB |
| FedProto $C=1$ | 12.11 | 16.23 | 44.7 MB |
| FedProto $C=50$ | 28.31 | 32.55 | 2.12 GB |
| FedDC $C=1$ | 11.32 | 15.23 | 44.7 MB |
| FedDC $C=50$ | 30.23 | 44.23 | 2.12 GB |
| Ours | 71.84 | 77.83 | 44.7 MB |

$C$ is the number of communication rounds. 
**We note that FAFI can achieve competitive performance even in extremely heterogeneous scenarios (i.e., $Dir(0.05)$)**, requiring only 44.7 MB of communication cost. Additional details will be included in Appendix F in our revised version. **C1 & C2: Typos and open-source code** **Ans for C1 & C2**: Thanks for your comments. We will fix the typos in Appendix A, and we promise to release the source code.
Summary: The paper addresses the critical challenge of model inconsistency in OFL due to heterogeneous data. The authors identify two key inconsistencies (intra-model and inter-model) and propose a novel framework, FAFI, which combines client-side self-alignment local training (SALT) and server-side informative feature fused inference (IFFI). Extensive experiments on three classification datasets demonstrate significant performance improvements over 11 baselines. Claims And Evidence: The claims are clear and convincing. Methods And Evaluation Criteria: FAFI consists of SALT and IFFI for intra- and inter-model inconsistencies, respectively. Evaluations on CIFAR-10/100 and Tiny-ImageNet are appropriate. Theoretical Claims: I have checked the theoretical claims and proofs. Theorem 3.1 for intra-model inconsistency, Theorem 3.2 for inter-model inconsistency, and their proofs in the appendix are correct. Experimental Designs Or Analyses: I have checked the experimental designs and analyses. Supplementary Material: The authors have not provided supplementary material. Relation To Broader Scientific Literature: Unlike approaches that focus on server-side aggregation, this paper emphasizes improving local training strategies to achieve better models and mitigate the ‘garbage in, garbage out’ pitfall. Essential References Not Discussed: The references are essential and sufficient for understanding the key contributions. Other Strengths And Weaknesses: Strengths: - The proposed method is technically sound. Moreover, leveraging self-supervised methods and prototype learning is novel in OFL scenarios. - Comprehensive survey of existing OFL methods and good discussion of analogous methods, such as prototype-based FL and model merging approaches. - Significant performance improvement and extensive, sufficient evaluations. 
Weaknesses: - Computational costs for SALT’s contrastive learning (e.g., batch size = 256) seem expensive for resource-constrained clients, which could be limiting in edge scenarios with restricted computational capabilities. - The reliance on class-wise prototypes assumes a fixed and known number of classes, which may not hold in dynamic or open-set FL scenarios. - Some notations in the theoretical analysis lack explanation. - Federated learning under heterogeneous model scenarios is a very practical research problem, where the model architectures differ across clients. FAFI only considers homogeneous model scenarios. Other Comments Or Suggestions: No Questions For Authors: - My main concern with this work is its applicability to resource-constrained scenarios, such as edge or mobile environments. In these scenarios, more lightweight models (e.g., MobileNet) tend to be used. Can FAFI perform well on lightweight models? Besides, the diverse computation capabilities across clients may prevent clients from adopting models with the same architecture. Can FAFI support heterogeneous model architectures? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for taking the time to review our work. We greatly appreciate your recognition of the proposed method as both technically sound and high-performing. Please find our detailed responses to your concerns below. **W1 & Q1: Applicability to resource-constrained scenarios.** **Ans for W1 & Q1:** Thanks for your important comments. We note that **our proposed FAFI can support lightweight models and is applicable to resource-constrained scenarios**. We have compiled statistics on the computational and memory costs with MobileNet and ResNet-18 on CIFAR-10.

| Methods | Memory Cost | Computation Cost (GPU / CPU) | Accuracy |
| --- | --- | --- | --- |
| MobileNet | | | |
| O-FedAvg | 1942 MB | 6s / 138s | 10.44 |
| IntactOFL | 1942 MB | 8s / 141s | 32.31 |
| Ours $b$=32 | 1086 MB | 11s / 183s | 55.21 |
| Ours $b$=256 | 1838 MB | 14s / 227s | 58.33 |
| Ours $b$=512 | 3082 MB | 17s / 240s | 59.64 |
| ResNet-18 | | | |
| O-FedAvg | 4638 MB | 8s / 197s | 12.13 |
| IntactOFL | 4638 MB | 10s / 204s | 48.33 |
| Ours $b$=32 | 2789 MB | 18s / 307s | 69.73 |
| Ours $b$=256 | 8792 MB | 27s / 319s | 70.24 |
| Ours $b$=512 | 14723 MB | 32s / 428s | 71.84 |

We use the GPU memory occupied during local training as the metric for memory cost, while computation cost is measured by the time taken per epoch on a GPU (RTX 4090) and a CPU (Intel Core i7-11700K). Here, $b$ represents the batch size. We believe that FAFI can achieve competitive performance even in resource-constrained scenarios, requiring only 11 seconds on a GPU with less than 2 GB of memory or approximately 3 minutes on a CPU. Additional details will be included in our revised version. **W2: Assumption on the number of classes, which may not hold in dynamic or open-set FL scenarios.** **Ans for W2:** Thanks for your comments. FAFI assumes that the number of classes is known in advance. However, in dynamic or open-set FL scenarios, the number of classes may change over time. 
We note that FAFI can be easily extended to these scenarios by extracting the invariant features of the new classes and learning a new discriminative prototype for them. We will include this discussion in our revision. **W3: Some notations are not clear.** **Ans for W3:** Thanks for your comments. We apologize for the unclear notations. For clarity, 1) $\Delta_{intra} = \vert L(x, y) - L(x', y) \vert$ is the performance discrepancy between any two samples $x, x'$ with the same label $y$; 2) $A$ refers to a function that can abstract the feature variation among samples. We will clarify these notations in the revision. **W4 & Q1: Model heterogeneity scenarios.** **Ans for W4 & Q1:** Thanks for your comments. We note that FAFI can support heterogeneous models. We have conducted new evaluations on heterogeneous models with five different architectures (LeNet, ResNet-18, VGG, MobileNet, ResNet-50) on CIFAR-10 with data heterogeneity $Dir(0.1)$, as shown below, and will add more details in Appendix F. We only report part of the results here due to limited space.

| Client 0 | Client 1 | Client 2 | Client 3 | Client 4 | IntactOFL | Ours |
| --- | --- | --- | --- | --- | --- | --- |
| LeNet | ResNet-18 | VGG | MobileNet | ResNet-50 | 48.33 | 71.84 |
| VGG | MobileNet | ResNet-50 | LeNet | ResNet-18 | 52.55 | 72.12 |
| ResNet-50 | LeNet | ResNet-18 | VGG | MobileNet | 54.12 | 72.45 |

--- Rebuttal Comment 1.1: Comment: Thank the authors for the further experimental analysis and for clarifying the rationale, which have addressed most of my questions and concerns. I appreciate this manuscript's interesting task, clear presentation, and extensive experiments. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Chua: We're grateful for your quick feedback during this busy period. We deeply appreciate your consideration in raising the score. Your constructive comments have significantly contributed to the refinement of our work. Thanks a lot for your valuable comments! 
We will remain open and ready to delve into any more questions or suggestions you might have until the last moment. Best regards and thanks
STNet: Spectral Transformation Network for Solving Operator Eigenvalue Problem
Reject
Summary: The authors are interested in solving an eigenvalue problem for a differential operator. Claims And Evidence: The authors claim to approximate the eigenvalues with higher accuracy than competing approaches, which they do, but I am not sure about the overall performance as the competition seems really terrible. Methods And Evaluation Criteria: The authors use interesting eigenvalue problems, but since they always report absolute errors this could be misleading; moreover, the performance shown in this measure fails to give two decently converged eigenvalues for a one-dimensional problem. Again, the better results in higher dimensions may not be meaningful given that the errors are absolute. Theoretical Claims: There are no proofs in the manuscript. Experimental Designs Or Analyses: The authors compare on interesting eigenvalue problems against other methods, albeit it is not clear this is always done appropriately. Supplementary Material: I briefly checked the parts on the fundamentals of the eigenvalue method and the deflation procedure. Relation To Broader Scientific Literature: The authors cite the generic numerical literature for eigenvalue problems. Essential References Not Discussed: There might be specific methods tailored to PDE eigenvalue problems that the authors did not cite. All their references for EVPs are very generic, like the books by Saad or Golub/Van Loan. Other Strengths And Weaknesses: **Weaknesses** For the computation of the first few eigenvalues it would actually be possible to use sparse grids to obtain accurate results in 5 dimensions. So if the goal is to compute the first few eigenfunctions, the authors should compare this method against existing sparse grid methods to get high accuracy. The derivation of the suggested method is difficult to follow, and it is also unclear to me what the advantage of a neural-network-based approach is. 
I think the convergence behaviour is not well understood; for the shift-and-invert method it requires some spectral gap, and this could have been exploited more. Comparison to approaches outside of learning-based methods seems insufficient (see the sparse grid comment; in lower dimensions these should be able to give very accurate results). Other Comments Or Suggestions: The notation $NN$ for the neural network is confusing, as in the next line $N$ appears as the number of sampling points. Most of the references are formatted poorly; capitalize the names Krylov, Schur, Fokker, Planck, etc. Questions For Authors: How does the method compare to other traditional schemes applied to the discretized eigenvalue problems? Code Of Conduct: Affirmed. Overall Recommendation: 2
Summary: This paper focuses on solving eigenvalue and eigenfunction problems. Numerical methods suffer from the curse of dimensionality. There is a trend of attacking the problem with deep learning methods, e.g., NeuralEF [1], NeuralSVD [2], etc. The authors try to improve the existing methods in terms of precision. Their improvement is entirely based on two classic techniques in eigenvalue problems, i.e., deflation projection and filter transform. The implementation is straightforwardly summarized in Algorithm 2. These techniques are complementary to and thus can be combined with the work on the neural network side. Based on diversified experiments, the proposed method exhibited promising results that outperformed all baseline models in accuracy with the same number of iterations. [1] Deng, Z., Shi, J., and Zhu, J. NeuralEF: Deconstructing kernels by deep neural networks. In International Conference on Machine Learning, pp. 4976–4992. PMLR, 2022. [2] Ryu, J. J., Xu, X., Erol, H., Bu, Y., Zheng, L., and Wornell, G. W. Operator SVD with neural networks via nested low-rank approximation. arXiv preprint arXiv:2402.03655, 2024. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I checked the proofs in A.2 and A.3 and found no issues except a typo. Experimental Designs Or Analyses: Yes. Supplementary Material: N.A. Relation To Broader Scientific Literature: N.A. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: **Significance** The proposed method is of high value because of its simplicity and effectiveness. It is purely based on classical techniques, e.g., deflation projection and filter transform, which are complementary to improvements on the neural network side. **Weakness** 1. In Section 2 Related Works, the last paragraph seems a little irrelevant, or its connection to the subject of this paper is not well elaborated. 2. The current experiment does not contain a comparison with respect to convergence speed. 
Would it be beneficial to present such a comparison? Other Comments Or Suggestions: Typo: 1. Line 613, $\bf{A}_1=\bf{A}-\sigma \bf{v}_1 \bf{v}_1^H \rightarrow \bf{A}_1=\bf{A}-\sigma \bf{v}_1 \bf{v}_1^T$ Questions For Authors: What is the particular reason that, in Table 1, NeuralEF does not have values for Dim=2 and Dim=5? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and valuable comments. We respond to your comments as follows and sincerely hope that our rebuttal properly addresses your concerns. If so, we would deeply appreciate it if you could raise your score and your confidence. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our work. ## **Other Strengths And Weaknesses 1** > In Section 2 Related Works, the last paragraph seems a little irrelevant or the connection to the subject of this paper is not well elaborated. - We apologize for this issue; it will be corrected in future versions of the paper. ## **Other Strengths And Weaknesses 2** > The current experiment does not contain comparison wrt convergence speed. Would it be beneficial to present such comparison? - Thank you for your suggestions. Here is experimental evidence to clarify our claims: - 2D Harmonic Operator Case Study: We analyzed how NeuralSVD and STNet improve accuracy as iterations progress. The trend of absolute error reduction is shown in this figure: [Convergence Comparison](https://anonymous.4open.science/r/rebuttal2-534E/rebuttal1.3.pdf). - Key Takeaway: STNet achieves significantly faster convergence than NeuralSVD. This stems from STNet's design, which mimics the power method's iterative approach to eigenvalue estimation. We will add more detailed analysis and experiments in the final version of the paper. ## **Other Comments Or Suggestions** > Typo: Line 613 - We apologize for the confusion caused. This error will be corrected in future versions of the paper. ## **Questions** > What is the particular reason that in Table 1, NeuralEF does not have values for Dim=2 and Dim=5? - As described on line 311 of the paper, NeuralEF encountered numerical instability in the 2D and 5D harmonic operator problems. 
The resulting errors were significantly larger than the target eigenvalues, making the data unsuitable for meaningful comparison. Thus, we excluded these results. - NeuralEF’s inability to accurately resolve eigenvalues with relatively small magnitudes may stem from stochastic optimization or numerical errors. This limitation is also noted in NeuralEF’s original paper ([2], page 9, Section 6, paragraph 2). - A similar issue is observed in NeuralSVD’s study. For the 2D harmonic oscillator experiment ([1], page 8, Figure 4b), NeuralEF exhibits a relative error of over 100%, which strongly aligns with our conclusions. [1] Operator SVD with Neural Networks via Nested Low-Rank Approximation, ICML 2024, https://github.com/jongharyu/neural-svd [2] NeuralEF: Deconstructing Kernels by Deep Neural Networks, ICML 2022
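The rebuttal above notes that STNet's update mimics the power method's iterative approach to eigenvalue estimation. The classical power iteration it refers to can be sketched on a toy diagonal matrix; this is an illustrative example of the general scheme, not code from the paper:

```python
import numpy as np

# Minimal power-iteration sketch (illustrative only): repeatedly applying
# the operator and renormalizing makes the iterate align with the dominant
# eigenvector, at a rate governed by the spectral gap |lam2/lam1|.
A = np.diag([4.0, 1.0, 0.5])      # toy operator, dominant eigenvalue 4
v = np.ones(3) / np.sqrt(3.0)     # arbitrary normalized starting vector

for _ in range(50):
    v = A @ v                     # apply the operator
    v /= np.linalg.norm(v)        # renormalize

lam = v @ A @ v                   # Rayleigh quotient estimate of lam1
```

The dependence of the convergence rate on the gap between the leading eigenvalues is exactly why spectral transformations that widen this gap (as in the filter transform) speed up such iterations.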
Summary: The paper introduces the Spectral Transformation Network for solving operator eigenvalue problems, addressing challenges posed by high-dimensional operators. STNet uses deflation projection to remove the subspace corresponding to already-computed eigenfunctions, ensuring that the network does not converge to the same eigenpair repeatedly and reducing the effective search space. STNet also uses a filter transform to amplify eigenvalues in the target region while suppressing others, thereby increasing the spectral gap and accelerating convergence. Experiments on the harmonic eigenvalue problem, the Schrödinger oscillator equation, and the Fokker–Planck equation demonstrate that STNet achieves SOTA accuracy, outperforming existing deep learning-based methods and traditional numerical approaches in high-dimensional settings. ## update after rebuttal I thank the authors for the updated information. I maintain a positive score for this submission. Claims And Evidence: The claim that STNet achieves state-of-the-art accuracy in computing operator eigenvalues compared to existing deep learning and traditional methods is well supported by diverse experiments. STNet's performance improvement is significant. The effectiveness of the two components of STNet, deflation projection and filter transform, is validated through ablation. Methods And Evaluation Criteria: The baselines and evaluation criteria are appropriate for the operator eigenvalue problems at hand. Theoretical Claims: There is no proof provided in the main paper. I did not verify the proofs in the appendix. Experimental Designs Or Analyses: The experimental designs and analyses in the paper are sound to me. They provide convincing evidence for the superior performance of STNet, with appropriate benchmarks and metrics. Supplementary Material: No. Relation To Broader Scientific Literature: There are no significant connections to broader scientific literature in my view. 
Essential References Not Discussed: I don't notice any essential references are missing. Other Strengths And Weaknesses: The paper presents a novel idea and is clearly written. The experiments show significant improvements. Other Comments Or Suggestions: I recommend that the authors add visualizations to clearly illustrate the effects of both deflation projection and filter transform. This would greatly enhance the reader's intuition and understanding of these mechanisms. Questions For Authors: I have no additional questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and valuable comments. We respond to your comments as follows and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score and your confidence. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our work. ## **Other Comments Or Suggestions** > I recommend that the authors add visualizations to clearly illustrate the effects of both deflation projection and filter transform. This would greatly enhance the reader's intuition and understanding of these mechanisms. - Thank you for this excellent suggestion. We have prepared preliminary visualizations that effectively demonstrate the spectral transformations achieved by these two key components: 1. **Deflation Projection** (shown in https://anonymous.4open.science/r/rebuttal2-534E/rebuttal1.1.pdf): (a) this operation maps already-computed eigenvalues to zero while preserving other target eigenvalues; (b) the visualization clearly shows how this prevents solved eigenvalues from interfering with subsequent computations. 2. **Filter Transform** (shown in https://anonymous.4open.science/r/rebuttal2-534E/rebuttal1.2.pdf): (a) this transformation modifies the spectral distribution by amplifying the region containing the eigenvalues of interest and compressing less relevant spectral regions; (b) this effect significantly improves the solvability of the target eigenvalues. - For the final version, we will refine these visualizations for greater clarity.
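For readers who want to see the deflation effect without the visualizations, it can be reproduced in a few lines of plain NumPy (an illustrative toy on a small symmetric matrix, entirely our own sketch, not the authors' code): projecting out a solved eigenvector maps its eigenvalue to zero while preserving the rest of the spectrum, exactly as described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small random symmetric matrix stands in for the discretized operator.
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2

# Suppose the top eigenpair has already been computed.
eigvals, eigvecs = np.linalg.eigh(A)
v_top = eigvecs[:, -1]

# Deflation projection: project out the subspace spanned by the solved eigenvector.
P = np.eye(6) - np.outer(v_top, v_top)
A_deflated = P @ A @ P

# The solved eigenvalue is mapped to 0; all remaining eigenvalues are preserved.
deflated_vals = np.linalg.eigvalsh(A_deflated)
expected = np.sort(np.append(eigvals[:-1], 0.0))
assert np.allclose(np.sort(deflated_vals), expected)
```

Applying the same projection repeatedly removes one solved eigenpair at a time, which is what allows a power-method-style solver to recover multiple eigenpairs instead of collapsing onto the same one.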
Summary: This paper proposes a method to find eigenfunctions of a given operator using neural networks. The idea is to combine ideas from numerical linear algebra to train neural networks to fit underlying eigenfunctions: (1) power method, (2) deflation projection, and (3) filter transform. The experiments are performed for different differential operators. ## update after rebuttal I appreciate the authors' response and I have increased my score to 2. I believe the empirical results are promising and interesting (including the significantly smaller number of parameters), assuming the experiments were performed as described in the response. That said, I find that several descriptions, such as those of the objective functions, implementations, and differences from existing methods, remain ambiguous and could be significantly streamlined to demonstrate its merit. In the case of acceptance or future submission, please carefully revise the manuscript to address these issues. Claims And Evidence: There are experiments for three PDEs, but the performance of the baseline methods (especially NeuralEF and NeuralSVD) does not seem to be in a reasonable range; see below (Other Comments and Questions) for detailed comments. Methods And Evaluation Criteria: - Compared to the existing methods NeuralEF and NeuralSVD, the difference seems to be in the use of the power method and the idea of filter transform, which is a nice addition to the learning framework. - One thing unclear is the core difference between STNet and the power method in (Yang et al., 2023). - A similar idea of the deflation technique was proposed to be applied to NeuralEF and NeuralSVD in the terminology of "sequential nesting" (Ryu et al., 2024). - Compared to NeuralEF or NeuralSVD, where the objective function is well defined and characterizes the desired ordered eigenfunctions in the order of eigenvalues, the objective function of STNet is not properly described. Eq.
(8) is mentioned as the optimization problem, but it is not computable as it does not have access to the underlying eigenfunction $v_i$'s. It is then mentioned in the first paragraph of Section 4.2 that `Since neural networks cannot directly implement the inverse operator, we enforce ... through a suitable loss function.`, but there is no loss function defined other than the loss in line 10 of Algorithm 2. It is not properly explained how this loss function is derived. Theoretical Claims: There is no theoretical claim. Experimental Designs Or Analyses: There are some flaws and issues in the experimental setup. See comment below. Supplementary Material: I went over the supplementary material carefully including the implementation details. Relation To Broader Scientific Literature: Many scientific and ML problems can be formulated via operator eigenvalue problems, and thus solving them in an efficient manner is of great importance. Using neural networks can help circumvent the curse of dimensionality. Improved techniques for operator eigenvalue problems are desirable and can lead to important breakthroughs, especially in physical simulations. Essential References Not Discussed: References seem adequate. Other Strengths And Weaknesses: On top of the incompleteness of the methodology part (the missing comparison of STNet to (Yang et al. 2023) and missing justification of the loss function), I have a serious concern about the experimental results, which are detailed below. Other Comments Or Suggestions: - This sentence is incorrect: `This variability primarily arises because NeuralEF and NeuralSVD employ a uniform grid to acquire data points, whereas STNet uses uniform random sampling.` The NeuralEF paper did not consider the operator eigenvalue problem, and there is no reason to say that "NeuralEF employs a uniform grid". 
Also, the NeuralSVD paper proposed to draw a fresh sample for every minibatch from a sampling distribution, which need not be uniform; see Appendix D.3 for the implementation with importance sampling and Appendix E.1.2, where a Gaussian distribution was used for the harmonic oscillator in (Ryu et al., 2024). At a higher level, NeuralEF and NeuralSVD are optimization frameworks to find eigenfunctions, and they need not be associated with a specific sampling scheme. This makes the comparisons in the paper questionable. - It is also unclear if STNet used "uniform random sampling" or "uniform grid", as the description on p.14 for STNet also indicates something like `For the 1-dimensional problem, the number of points is 20, 000, ...`. Questions For Authors: - Why are the neural network architectures used for NeuralEF/NeuralSVD and STNet different? Since all methods parameterize eigenfunctions, I think the most natural thing is to use the same architecture throughout for a fair comparison. - For a more informative comparison, when reporting accuracies of eigenvalue estimates, relative errors might be a better metric than absolute errors. Or, at least, indicate the true eigenvalues. - How were the results for NeuralEF and NeuralSVD obtained? For example in Table 1, the results of NeuralEF and NeuralSVD are unreasonably bad even for dim=1,2, which is inconsistent with the fairly reasonable performance reported in the NeuralSVD paper for other operators. Results in Table 3 for harmonic oscillators are also inconsistent with what's reported in (Ryu et al., 2024); see Fig. 7(b). - Why are the rows missing for NeuralEF in Table 1? Given all these concerns, I believe that the experiments in this paper should be reexamined. The pros and cons of the proposed framework should also be carefully explained in the methodology section, especially compared to NeuralEF and NeuralSVD, which are based on well-defined optimization problems.
Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and valuable comments. We respond to your comments as follows and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score and your confidence. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our work. ## Methods 1 The core difference between STNet and PMNN is that STNet uses Deflation Projection and Filter Transform for better approximation: - Deflation Projection prevents STNet from re-converging to already-predicted eigenfunctions, so that STNet can predict multiple eigenfunctions, while PMNN fails to do so. The ablation study of Deflation Projection in Sec 5.5 demonstrates this contribution. - Filter Transform is a spectral transformation that improves STNet's convergence performance, which is why STNet outperforms PMNN. The ablation study in Sec 5.5 also verifies this contribution. STNet and PMNN are both based on the power method, but STNet additionally employs Deflation Projection and Filter Transform. ## Methods 2 We argue that Deflation Projection is different from Sequential Nesting: - Different Ideas: The idea of Deflation Projection comes from the power method, where we need a more powerful loss for iterative updating, whereas Sequential Nesting treats the eigenvalue problem as an optimization problem and employs LoRA to serve that formulation. - Different Loss Functions: As stated in Eq 9 at Line 204, STNet approximates the power method, so its loss function is the norm of the difference between functions, whereas NeuralSVD employs the inner product of two functions as its loss. - Different Mathematical Purposes: **Deflation Projection performs a spectral transformation** of the operator problem, **which theoretically excludes the solved eigenfunction**.
Sequential Nesting introduces LoRA to avoid converging to the same eigenfunction. Generally, Deflation Projection is different from Sequential Nesting. To avoid this misunderstanding, we will add theoretical analysis to stress the difference. ## Methods 3 1. Methodological Differences: • NeuralEF/NeuralSVD reformulate eigenvalue problems as optimization tasks with custom loss functions. • Our approach directly mimics the power method through neural updates, enhancing performance via spectral transformations rather than problem reformulation. 2. Clarification of Eq. 8: • This equation represents STNet's theoretical objective (approximating target eigenfunctions), not a computational procedure. 3. Loss Function Implementation: • The loss function design is detailed in Section 4.2 (Line 181). • We will include a complete derivation in the final version for clarity. ## Comments 1. Yes, both NeuralSVD and NeuralEF use resampling for point selection. This was an oversight in our writing, and we will correct this section in the final version. 2. For our experiments, we used the official implementation of [1], following the default sampling settings in its paper. Importantly, this writing error does not affect the validity of our results. We sincerely apologize for the confusion. 3. For consistency, STNet uses random uniform sampling throughout all experiments. Specifically, we initialize with 20,000 randomly sampled points (as mentioned on lines 294 and 374), which are reused during iterations. ## Q1 1. The model architectures (depth and width) for [1] and [3] in our experiments were taken directly from the official code of [1]. Similarly, the PMNN model parameters were adopted from its official implementation [2]. For STNet, we kept its architecture identical to PMNN's to enable a fair comparison. 2. Parameter Efficiency: [1] requires ~100,000 parameters per eigenvalue-solving module, while STNet uses only 1,500 parameters per module.
This further highlights STNet’s efficiency and strong performance despite its simplicity. ## Q2 1. In cases where eigenvalues can be zero (e.g., the Fokker-Planck Equation example on page 7, line 370), using the relative error formula becomes impossible (since dividing by zero is invalid). To maintain consistency, all eigenvalue errors in our experiments are reported using the absolute error. 2. For every operator, we explicitly provide the true eigenvalues before analyzing experimental results: (a) the Harmonic Eigenvalue Problem is listed on page 6, line 300; (b) the Schrödinger Oscillator Equation is on page 9, line 326; (c) the Fokker-Planck Equation is on page 10, line 346. ## Q3 - In all experiments, [3] and [1] were implemented using the official code from [1]'s GitHub repository. We only modified the target differential operator being solved. For the problems in Table 3, we strictly used the original code without any modifications. All reported results are authentic and reproducible. ## Q4 - Please see the response to reviewer 6EVU **Questions** [1] NeuralSVD [2] PMNN [3] NeuralEF
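As background for the filter/power-method discussion in this thread, the classical shift-invert filter illustrates what a spectral transformation buys (a toy example of the general technique, not STNet's specific transform): eigenvalues near a chosen shift are amplified, which shrinks the ratio that governs power-method convergence and redirects the iteration toward the targeted eigenpair.

```python
import numpy as np

# Toy operator with known spectrum 1..6; we target the eigenvalue nearest sigma.
lam = np.arange(1.0, 7.0)
A = np.diag(lam)
sigma = 3.1

# Shift-invert filter: the spectrum of inv(A - sigma*I) is 1/(lam - sigma), so
# eigenvalues close to sigma dominate while the rest are compressed.
F = np.linalg.inv(A - sigma * np.eye(6))
fvals = np.sort(np.abs(np.linalg.eigvalsh(F)))[::-1]

# Power-method convergence is governed by |lam_2 / lam_1|: smaller is faster.
rate_plain = lam[-2] / lam[-1]       # 5/6 ~ 0.83 on A itself: slow
rate_filtered = fvals[1] / fvals[0]  # ~ 0.11 on F: fast

# Power iterations on the filtered operator converge to the eigenvector of the
# eigenvalue nearest sigma (lam = 3, i.e. index 2), not the largest one.
v = np.ones(6)
for _ in range(30):
    v = F @ v
    v /= np.linalg.norm(v)
assert np.argmax(np.abs(v)) == 2
```

The same qualitative effect, a larger spectral gap and hence faster, targeted convergence, is what the rebuttal attributes to the Filter Transform.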
Distributed Differentially Private Data Analytics via Secure Sketching
Accept (poster)
Summary: In this paper the authors are attempting to find a way to build solutions that have utility as high as in the case of central DP, but with as few assumptions as in the case of local DP. The most common example of such an approach is privacy amplification by shuffling. However, the number of mechanisms that could use shuffling is limited. Hence, the paper proposes to use linear transformations instead of shuffling. The paper further suggests some applications of this idea to linear regression and low-rank approximation. Claims And Evidence: The evidence sufficiently supports the claims. Methods And Evaluation Criteria: Methods and evaluations are appropriate for the problem at hand. Theoretical Claims: The proofs presented in the paper are correct. Experimental Designs Or Analyses: The experimental design seems valid. Supplementary Material: I haven't reviewed supplementary material. Relation To Broader Scientific Literature: While this is the first paper discussing the use of MPC to implement linear transformations in place of a trusted central aggregator, it is not the first paper (as admitted by the paper itself) that discusses the idea of using MPC to avoid assumptions on central aggregators. The paper proposes alternative algorithms for low-rank approximation and for ridge regression which are better than those achievable in the local model, but worse than the central model (which is not a surprise). Essential References Not Discussed: The paper doesn't lack any specific reference; however, the discussion of the shuffle model and other deployments where the central server is replaced by MPC is lacking details. Other Strengths And Weaknesses: The concept of decentralizing the traditional central server model and replacing it with an MPC protocol is a driving force behind the current push to apply DP in real-world scenarios. Given this trend, the design of algorithms that can be easily translated into straightforward MPC protocols is of paramount importance.
Unfortunately, the paper under review tackles problems that lack immediate, real-world applications. This disconnect between the theoretical focus of the paper and practical applicability makes it challenging to argue convincingly for the significance of its results. Especially considering that the theoretical results are not very complex either. Other Comments Or Suggestions: N/A Questions For Authors: Your definition of corrupted clients / servers seems unusual to me since it assumes that they are faithfully following the protocol and we just know their result without privacy protections. Is this correct? (If yes, I would suggest calling them spoofed or something similar.) Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and questions. > The discussion of shuffle model and other deployments where the central server is replaced by MPC is lacking details. Regarding missing details on replacing trust in a central server by MPC, ​​we made an extensive comparison in Section 3.3. A complete introduction for MPC is out of scope, but we give some details for our constructions in the appendix. If the reviewer thinks specific points are missing, we would be grateful for pointers. > Your definition of corrupted clients / servers seems unusual to me since it assumes that they are faithfully following the protocol and we just know their result without privacy protections. Is this correct? (If yes, I would suggest calling them spoofed or something similar.) The privacy guarantees will hold even if a large fraction of corrupted parties deviate from the protocol description. We currently only require the semi-honest assumption for the utility guarantees.
Summary: The authors introduce the linear-transformation model (LTM) for distributed differentially private data analytics. The main question that the authors aim to address is "what is the least expressive F that needs to be securely implemented such that distributed DP utility is comparable to that of the central model". In the LTM, the key idea is to leverage efficient secure multiparty computation (MPC) techniques to implement the linear transformation in a distributed setting. Interestingly, the authors provide a new definition (Definition 3.1), the Trusted Computation Model for Differential Privacy, for the case of clients colluding with the adversary. The authors also discuss the usage of the LTM for tasks such as estimating frequency moments and low-rank approximation. They support their contributions with detailed theoretical utility and privacy guarantees and also an empirical evaluation using real-world datasets (Table 2) demonstrating that the error is close to that of the central setting. Claims And Evidence: I find most of the claims to be supported. Methods And Evaluation Criteria: Section 5 evaluates the proposed approach for low rank approximation and the evaluation is convincing for small epsilon values. Theoretical Claims: Didn't check all the proofs in detail, but the result looks correct. Experimental Designs Or Analyses: I'm wondering what happens when the privacy budget is large (when $\epsilon > 1$)? Supplementary Material: No. Relation To Broader Scientific Literature: This work has good potential to influence future research in improving the privacy-utility trade-offs of differential privacy using crypto techniques. Essential References Not Discussed: No. Other Strengths And Weaknesses: The idea is sound and the authors provide detailed theoretical analysis for the proposed algorithms. Other Comments Or Suggestions: D1. Please move Figure 1 to page 2. D2. Please use different line styles/ticks for all figures.
For example, it is super hard to read Figure 2 based only on the color coding. Questions For Authors: Q1: What happens with a larger privacy budget? Q2: Please consider adding an explicit threat model. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and questions. We will incorporate the useful editorial suggestions about our Figures in the final version. > What happens with a larger privacy budget? We discuss our choice of privacy budget $\epsilon$ in the experiments in our answer to reviewer sQPT as well. We chose relatively small values to showcase the separation of different mechanisms in the LTM, which is more prominent when more noise is added. Increasing $p$ for our Gaussian noise mechanism in the LTM while fixing $n$ can be interpreted as increasing $\epsilon$, so Figure 2 shows that increasing $\epsilon$ decreases the error significantly in all settings and for all values of $n$. > Please consider adding an explicit threat model. We will further clarify the threat model in the next revision and make it explicit already at the beginning of Section 3. Specifically, in our privacy analysis, we assume that up to $t’ < n$ clients and up to $t < k$ servers may collude and share information with each other, while otherwise following the protocol honestly. This type of adversarial behavior is commonly referred to as passive, semi-honest, or honest-but-curious in the cryptography literature.
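For readers unfamiliar with the semi-honest setting, additive secret sharing is the standard building block behind statements like the above: each server's share is uniformly random on its own, yet public linear maps can be applied share-wise. A minimal toy sketch (our own illustration, not the paper's protocol; all names are ours):

```python
import secrets

MOD = 2**64  # shares live in a 64-bit ring

def share(x, k):
    """Split x into k additive shares; any k-1 of them are uniformly random."""
    parts = [secrets.randbelow(MOD) for _ in range(k - 1)]
    parts.append((x - sum(parts)) % MOD)
    return parts

def reconstruct(parts):
    return sum(parts) % MOD

x = 123_456_789
parts = share(x, 3)
assert reconstruct(parts) == x

# Linearity: each server multiplies its own share by a public constant
# (a 1x1 "linear transformation"); reconstructing the transformed shares
# yields the transform of the secret, with no interaction between servers.
c = 7
assert reconstruct([(c * p) % MOD for p in parts]) == (c * x) % MOD
```

Under this scheme, any coalition of fewer than all the servers sees only uniformly random values, which matches the collusion bounds stated in the threat model.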
Summary: The paper introduces the Linear Transformation Model (LTM), a new differential privacy (DP) trust model between central DP, where a single party is trusted, and local DP, where no party is trusted. In the LTM, a linear computation can be used to transform a set of private inputs of parties before they are revealed. This is especially suitable for Multiparty Computation (MPC) techniques based on a set of servers where at least one of them does not collude with the adversary, as linear computations can be performed efficiently with such cryptographic tools. The paper shows the accuracy gain of the LTM with respect to the Local Model of DP. In addition, it provides arguments for its better suitability with respect to the Shuffle Model, which also lies in between central and local DP. Claims And Evidence: - The main claim of the paper is that the LTM provides substantial accuracy advantages with respect to local DP. I am partially convinced about this claim. It is clear that performing a secure linear transformation can lead to a significant privacy amplification. However, I am not convinced that the utility analysis of Section 4 completely characterizes this gain. The term $\alpha_{S}$ which impacts the accuracy of the protocol is only analyzed order-wise, while it should be more concretely quantified. Therefore I am not sure about the magnitude of the gain. - As a second claim, the paper argues that the LTM is more convenient than the Shuffle Model in terms of computation and communication cost. I am convinced by the authors on this point. Methods And Evaluation Criteria: Empirical evaluation criteria seem reasonable. Theoretical Claims: I have checked the proofs of Theorems 3.4 and 3.5, which seem correct. Experimental Designs Or Analyses: In general the paper seems experimentally sound.
However, some parts of it are hard to follow, so I am not sure that all the approximation error that impacts the accuracy is included in the comparison (see the previous comments on $\alpha_{S}$). Supplementary Material: I have reviewed appendices A, B, D and G. Relation To Broader Scientific Literature: I think the paper is adequately related to broader scientific literature. The paper positions fairly well with respect to shuffle DP. However, certain statements seem unfair. For example, at the beginning of Section 3 the paper argues that the shuffle model is weaker because it assumes that all parties are honest and do not collude with the adversary. This seems easy to fix in the shuffle model: in the presence of corrupted parties, one can only consider the honest subset of parties to be part of the effective shuffle. Essential References Not Discussed: To the best of my knowledge, there are no essential references missed in the paper. Other Strengths And Weaknesses: As said before, the paper is in general hard to follow and Theorems 3.4 and 3.5 could be more clearly stated. In mathematical statements, some conditions rely on parameters that are not previously defined, for example, $n$ depending on $m$ in the randomizer function of Sec 3.2. Other Comments Or Suggestions: Table 1 ignores the effect of $\alpha$ for reasons of space. However I think it is important to find a strategy to concisely include this effect. Questions For Authors: Please address my concerns with respect to the characterization of the error induced by $\alpha_S$ both in theory and in the empirical evaluation. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and questions. > I am not convinced that the utility analysis of Section 4 completely characterizes this gain. The term $\alpha_S$ which impacts the accuracy of the protocol is only analyzed order-wise, while it should be more concretely quantified. Regarding the concrete quantification of $\alpha_S$, we are unsure if the reviewer means the possible values of $\alpha_S$ that can be chosen or the concrete constants in our analysis. If the question’s focus is on the constants in our theorem statements, for low rank approximation the leading constant on $\alpha_S$ is 1. For linear regression, we give the constants resulting from our analysis in Equation 12 in the appendix. However, the final constants depend on the analysis of the chosen sketching constructions, which is taken from previous work. The target dimensions of the sketching constructions from related work in Table 3 have been analyzed up to absolute constants, but the absolute constants themselves are not known. If the focus of the question is rather on the exact value of $\alpha_S$, we remark that $\alpha_S$ can be chosen to be any number in (0,½), so one can trade off multiplicative and additive error to get the best possible tradeoff. > Experimental Designs Or Analyses: I am not sure that all the approximation error that impacts the accuracy is included in the comparison. (see the previous comments on $\alpha_S$). Regarding the value of $\alpha_S$ in the experiments, it is not possible in general to experimentally determine which part of a cost function comes from a multiplicative error and which comes from an additive error. We can use the worst case bounds as a baseline, but these might not be tightly analyzed. The results presented in the experiments implicitly set the multiplicative error to a small constant and measure the excess risk, which is the closest approximation of a possible additive error.
> Table 1 ignores the effect of $\alpha$ for space. However I think it is important to find a strategy to concisely include this effect. The comparisons in Table 1 are with respect to additive error, since there is no multiplicative error in the central model. Multiplicative errors and additive errors cannot be fairly compared to each other, so we simply remark on the existence of our approach’s multiplicative error in the Table caption. These errors are analyzed in the proofs of the theorems in Section 4. > At the beginning of Section 3 the paper argues that the shuffle model is weaker… We apologize for the confusion caused by the statement in Section 3 — this was not our intention. Our point was not to argue that the shuffle model is inherently weaker, but rather to highlight that the standard formalization of differential privacy in the shuffle model does not, as stated, account for the case of colluding clients. Our motivation for introducing a refined definition was simply to ensure that the privacy guarantee holds even when a bounded number of clients might collude with the adversary, as we believe this is important for the applicability of the definition to real world scenarios. We fully agree with the reviewer that this refinement can naturally be applied to both the shuffle model and the LTM, and our comment should not be interpreted as a point of comparison between the two models. It is solely about the formalization of the security notion itself. As noted in the paper, similar observations have already been made in prior work (e.g., Talwar et al., 2023), fully in line with the reviewer’s comment. We will make sure to revise the text to clearly reflect this intention. --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you for your thoughtful reply. My concerns about the impact of $\alpha_{\mathbf{S}}$ have been clarified. I still think that the utility is not fully characterized given that the bounds have $(1-\beta)$ success probability. 
However, it seems that the impact of failure is small as the process can be repeated, probably modifying the accuracy negligibly. I think that completely integrating the effect of this failure would be a valuable addition to the paper. Nevertheless, it is not a strong concern so I will update my score supporting acceptance.
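The repetition argument in this comment is the standard probability-amplification (median/majority) trick; a quick simulation with an assumed per-run failure probability $\beta = 0.3$ (our own toy, not the paper's analysis) shows how fast the failure probability shrinks:

```python
import numpy as np

rng = np.random.default_rng(7)
trials, repeats = 10_000, 9
p_single = 0.7  # assumed per-run success probability, i.e. failure beta = 0.3

# One "run" succeeds w.p. p_single; taking the median (equivalently, a majority
# vote) over an odd number of independent runs succeeds whenever most runs do,
# so the failure probability drops exponentially in the number of repeats.
runs = rng.random((trials, repeats)) < p_single
single = runs[:, 0].mean()
boosted = (runs.sum(axis=1) > repeats // 2).mean()
assert boosted > single
```

With $\beta = 0.3$ and 9 repeats, the boosted success probability is already around 0.9, at the cost of a constant-factor increase in work, consistent with the reviewer's observation that the impact on accuracy is small.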
Summary: The authors present a new model for differential privacy, the linear transformation model (LTM). This model interpolates between the local model of DP, which does not require a trusted central server, and the central model of DP, which does. Some other intermediate models have been proposed, the most well studied being the shuffle model. The LTM works by having each user perform a local calculation on their data, then add noise to it and report it to an intermediate server. This server collects all the perturbed data from the users then applies a linear transformation to it. This transformed data is then processed in order to solve some task (such as low rank approximation). This is analogous to the shuffle model, replacing a shuffle with a linear transformation. The natural choice for a transformation is a linear sketch. The authors show that this linear sketch can be computed securely with techniques from multi-party computation (MPC). The shuffle model can also be implemented via MPC, but the implementation is less practical because it is more computationally expensive and includes a chain of communication between servers increasing latency. The authors show that under this new model, several tasks from numerical linear algebra can be solved by algorithms that satisfy LTM-DP while having error that does not scale with the number of clients. The specific tasks in question are low-rank approximation and ridge regression. Experiments demonstrate that the LTM algorithm for ridge regression gives lower error than the local-DP algorithm. Claims And Evidence: Most of the claims in this work are theoretical - these are addressed in the theoretical claims section. The claim that LTM algorithms will give lower error than local-DP algorithms is also supported by experiments on real and synthetic data. These experiments corroborate the theory.
One claim with fairly weak support is that the MPC procedures needed for the shuffle model are much more expensive than those needed for the LTM. This may be obvious to someone with more MPC background, but it would be good to have this justified more quantitatively either with asymptotic expressions for the communication/computation needed for these protocols or with experiments. Methods And Evaluation Criteria: The experimental methods and evaluation criteria are sensible given the theoretical nature of the claims. Theoretical Claims: The main theoretical claims are the privacy and utility results of the ridge regression and low-rank approximation algorithms. The tools used in this analysis are clear (infinitely divisible noise distributions and oblivious sparse norm-approximating projections) and the results are broadly in line with what I would expect given the combinations of these tools. However, I did not have time to carefully review the proofs to check their validity. Experimental Designs Or Analyses: One modification that would be informative would be a plot of error versus privacy budget for the various methods. The privacy budgets used in the paper are 0.1 and 0.5, which are fairly narrow. Supplementary Material: I reviewed the experiments in the supplementary and the related works section there. I did not review the proofs in the appendix. Relation To Broader Scientific Literature: This work is most closely related to other intermediate DP definitions such as the shuffle model and secure aggregation (which is an instance of the LTM). As far as I know, the shuffle model has not been implemented much in practice. The LTM seems promising in that it is clearly stronger than the central model while not being as restrictive as the local model and being easier to implement than the shuffle model. Essential References Not Discussed: I do not believe that there are essential references not discussed in this paper.
Other Strengths And Weaknesses: Strengths: I find the writing of this paper to be very clear and informative. This manuscript does a great job of giving the reader the necessary background about differential privacy models, linear sketching, and multi-party computation. The connection between the LTM and linear sketching is very natural, and I expect that many tasks from numerical linear algebra will be a good fit for this model. The experiments on real datasets give me confidence that these methods will be useful in practice. Weaknesses: The sketch parameters are underexplored; it seems natural to ask if there is some relationship between the optimal sketch parameters for a particular privacy budget or number of clients. Other Comments Or Suggestions: No other comments. Questions For Authors: You mention that the MPC protocols needed for shuffle are not practical while the MPC protocols for the LTM are. Can you explain this quantitatively: how much more communication/computation is needed for an MPC shuffle? What other tasks do you expect to be a good fit for the LTM? Do you expect that the LTM will be a strict upgrade over the shuffle model, or will there be tasks less related to numerical linear algebra that will be solvable with less error under the shuffle model? Code Of Conduct: Affirmed. Overall Recommendation: 4
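The client-noise-then-linear-sketch pipeline this review summarizes can be mimicked numerically (a toy under our own parameter choices, not the paper's mechanism); it also illustrates the infinitely divisible noise idea mentioned above: each client adds Gaussian noise of variance sigma^2/n, and the aggregate carries exactly the central-model noise scale sigma, independent of n, before the server's linear sketch is applied.

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, m = 500, 2000, 32   # clients, data dimension, sketch dimension (toy sizes)
sigma = 1.0               # target central-model noise scale

X = rng.standard_normal((n, d))               # one private vector per client
S = rng.standard_normal((m, d)) / np.sqrt(m)  # public linear sketch (Gaussian JL-style)

# Each client perturbs its own vector with noise of variance sigma^2 / n ...
Y = X + rng.normal(0.0, sigma / np.sqrt(n), size=(n, d))

# ... and the servers apply S to the (secret-shared) sum: a purely linear step.
sketched = S @ Y.sum(axis=0)

# Infinite divisibility of the Gaussian: the aggregate noise has the
# central-model scale sigma, regardless of the number of clients n.
agg_noise = (Y - X).sum(axis=0)
assert abs(agg_noise.std() - sigma) < 0.1
```

Because the sketch is applied after aggregation and is linear, the same computation can be carried out on additive secret shares without any server ever seeing an individual client's report.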
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and thoughts. > One modification that would be informative would be a plot of error versus privacy budget for the various methods. The privacy budgets used in the paper are 0.1 and 0.5, which are fairly narrow Regarding the choice of privacy budgets in the experiments, we chose to include results for $\epsilon=0.1$ and $\epsilon=0.5$ as those show a clear separation in the error between different mechanisms in the LTM. Note that increasing $p$ while keeping $n$ fixed can be interpreted as increasing the privacy budget of our Gaussian noise mechanism, so Figure 2 shows that an increase of privacy budget above 0.5 decreases the error drastically in all settings. We will include a plot that explicitly shows the error versus privacy budget in the final version. > The sketch parameters are underexplored, it seems natural to ask if there is some relationship between the optimal sketch parameters for a particular privacy budget or number of clients With respect to the sketch parameters, we would like to point out that one strength of the LTM is its modularity, in the sense that any sketch can be used when suitable for the data analytics task at hand. We explore different such sketches from the literature and list their parameters in Table 3 in the appendix. > You mention that the MPC protocols needed for shuffle are not practical while the MPC protocols for the LTM are. Can you explain this quantitatively, how much more communication/computation is needed for MPC shuffle? We acknowledge that we did not sufficiently substantiate the claim that MPC-based shuffles are more expensive than the LTM-based approach for readers who are not familiar with MPC. Due to space constraints, we could not include a full quantitative comparison, but we clarify here the key differences: 1. 
Round Complexity: In MPC-based shuffles, each server must sequentially shuffle and randomize (or re-encrypt) the ciphertexts, leading to a round complexity proportional to the number of servers involved. This sequential dependency significantly increases the protocol’s latency. In contrast, the LTM enables all servers to perform their computations in parallel, resulting in lower round complexity and latency. 2. Computational Overhead: In the LTM, servers only need to perform simple linear operations on secret shares, such as matrix-vector multiplications, which are highly efficient. On the other hand, MPC-based shuffles typically rely on mix-nets or similar constructions, where each server performs, for every data point, multiple expensive public-key operations (e.g., modular exponentiations with large moduli and exponents). This introduces an overhead that is orders of magnitude higher than that of simple matrix multiplications. 3. Communication Complexity: The total number of messages transmitted is comparable between the two models, but the size of individual messages differs substantially. In the LTM, secret shares can be as small as the plaintext itself; for example, when working with 64-bit types, each share is just 64 bits per server. In contrast, MPC-based shuffles transmit ciphertexts that are significantly larger due to the inherent expansion of public-key encryption. For example, even with elliptic curve ElGamal, each ciphertext would be at least 512 bits. As a result, the total communication volume in MPC shuffles is larger. Finally, we emphasize that MPC protocols, in general, tend to incur significant communication and computational costs when dealing with non-linear operations (e.g., multiplications on secret-shared data). The LTM model avoids this overhead by relying exclusively on linear operations on secret-shared data, which are much cheaper to compute. > What other tasks do you expect to be a good fit for the LTM? 
Do you expect that the LTM will be a strict upgrade over the shuffle algorithm or will there be tasks less related to numerical linear algebra that will be solvable with less error under the shuffle model. **Good Fit for the LTM:** Currently, we expect many problems beyond numerical linear algebra to be addressable in the LTM. A promising example are certain clustering tasks like k-median clustering, which we are currently working on. **Comparison to Shuffle:** We do not believe that the LTM will be an upgrade over the shuffle model, rather it is a complementary model with incomparable strengths and weaknesses. Proving a separation between the two models would be interesting and potentially challenging. --- Rebuttal Comment 1.1: Comment: I think that a more detailed comparison would be very helpful for readers without the MPC background, and I hope that future versions of this manuscript find space to include that information in the main text or an appendix.
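For readers without an MPC background, the message-size arithmetic from point 3 of the rebuttal above can be made concrete with a short sketch. The 64-bit share and 512-bit ciphertext figures come from the rebuttal itself; the server count and the assumption that a mix-net forwards each ciphertext through every server exactly once are illustrative simplifications, not claims from the paper.

```python
# Illustrative per-data-point message-size comparison between the LTM and an
# MPC-based shuffle, using the figures stated in the rebuttal above.
# Routing assumptions (one share per server; one ciphertext per mix-net hop)
# are simplifications for illustration only.

SHARE_BITS = 64         # one 64-bit additive secret share per server (LTM)
CIPHERTEXT_BITS = 512   # lower bound for an elliptic-curve ElGamal ciphertext

def ltm_bits(num_servers: int) -> int:
    """LTM: the client sends one small share to each server, in parallel."""
    return num_servers * SHARE_BITS

def mpc_shuffle_bits(num_servers: int) -> int:
    """Mix-net shuffle: each ciphertext is re-encrypted and forwarded through
    every server in sequence, so each hop carries one full ciphertext."""
    return num_servers * CIPHERTEXT_BITS

# With 3 servers: 192 bits (LTM) vs 1536 bits (shuffle) per data point.
```

The 8x gap here reflects only ciphertext expansion; the sequential round complexity and the public-key operations discussed in points 1 and 2 of the rebuttal come on top of this.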
M³HF: Multi-agent Reinforcement Learning from Multi-phase Human Feedback of Mixed Quality
Accept (poster)
Summary: In this work, the authors introduce a technique for training agents in MARL settings using human feedback as a substitute for hand-designed reward functions. Specifically, their training pipeline involves (1) collecting human language feedback about a rollout video after a period of training, (2) using an LLM to design a reward function based on the feedback and reward templates, and (3) adjusting weights of reward functions to improve training. They validate their training on Overcooked and find significant improvements in final performance relative to existing MARL baselines. ### Update After Rebuttal The rebuttal clarified my original confusion between MAPPO and IPPO, and I am satisfied with the additional clarifications regarding theoretical assumptions and proofs. Claims And Evidence: Claims are generally supported, though I listed specific issues with some claims in later sections of the review. Methods And Evaluation Criteria: Yes, the proposed method and benchmark makes sense for this problem. Theoretical Claims: - Proposition 4.1 is valid but irrelevant to the game studied in this setting since the Markov Game is not ergodic. Specifically, we know Overcooked is not ergodic since the “timestep” feature changes at each step (and periodic episode resets violate aperiodicity). - I generally felt that the section on “Approximation of Policy Performance via Rollouts” in Section 4.1 was unnecessary and unclear. In particular, it seems like the empirical distribution of states+actions in any single rollout trajectory won’t approximate the “true distribution” but this is easily solved by just having multiple rollouts. - Proposition 4.2 is based on the assumption that the noise is zero-mean for all states and actions, but this contradicts the premise that this noise is based on “misunderstanding, lack of expertise, or other factors” since the reward function generated by the LLM is deterministic. 
However, I think the proof still works if we drop the noise term and just state the “true” human feedback reward function is faulty. Experimental Designs Or Analyses: - The experimental results show that MAPPO consistently underperforms IPPO, which makes me question the hyperparameter tuning and network architectures used to generate the results. The only difference between MAPPO and IPPO (according to the original MAPPO paper) is the fact that MAPPO has centralized value function inputs, which was found to be broadly helpful. The only other possible difference is that parameter-sharing is used for one implementation but not for the other. I was not able to find details regarding the differences in MAPPO or IPPO in the main text or appendix, so this point needs to be clarified. - In general, there are other baselines that are simply not presented that attempt to solve the same problem of sparse extrinsic rewards (i.e. the techniques listed in the MARL section of the Related Work). Without comparing to existing baselines, it is unclear how helpful human feedback is relative to intrinsic rewards. Supplementary Material: I reviewed the appendix, specifically focusing on sections A-D Relation To Broader Scientific Literature: The key contribution of the work is bringing human feedback (though a code-gen LLM) within a MARL training loop as a form of reward shaping. This method appears to be novel to me. Essential References Not Discussed: The related works section is comprehensive. Other Strengths And Weaknesses: - The paper reads very well and the diagrams are very clear. Overall, I understand the intuition behind the techniques and why it helps improve performance. - I disagree with the claim that “designing appropriate reward functions” is a fundamental and significant challenge in MARL (beyond the challenge in single-agent RL). 
In particular, although naive self-play often converges to suboptimal equilibria, this is often due to the dynamics of learning compatible conventions, not fundamental flaws in the reward function. - One possible framing of this work is that human feedback helps align AI behavior to human conventions (though this may come at the cost of stronger self-play performance) - Despite the framing of the paper as a technique to help multi-agent RL, the technique has little to do with the multi-agent setting other than the fact that Overcooked was chosen as the setting of interest. A difficult single-agent setting could’ve been sufficient for demonstrating the pipeline. - Again, framing this as human-AI alignment and conducting human-AI user studies would’ve resolved this concern. - The “reward templates” require significant domain knowledge and seem to limit the applicability of the technique to new domains. Other Comments Or Suggestions: - Multiple typos of word “function” in Figure 1 - In Figure 2’s caption it seems like the descriptions of B and C are swapped? - The empirical occupancy measure in eq 8 is undiscounted, which means it does not approximate the discounted occupancy measure in eq 6. - “formal” should be “former” in line 306 - I don’t think “the” should be in the section title of 4.4 Questions For Authors: - “Long-term Exploration Failure” for rollout generation is a bit unclear. It seems like the experiments presented only ask humans for feedback every 200 iterations, so how does the long-term exploration failure fit into your final method? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Reply to Reviewer sgwG We thank the reviewer for recognizing our pipeline as a "novel" contribution and for the positive comments on clarity and presentation. We address your concerns below. --- ### 1. Regarding the Framing of our work Thanks for your thoughtful comments on our paper's scope and framing. We agree that human feedback plays a critical role in improving human-AI alignment. In our revision, we will clarify our motivations and explain how our work contributes to addressing key challenges for human-AI alignment. --- ### 2. Regarding Theoretical Assumptions > Proposition 4.1’s relevance: You are correct that Overcooked is not ergodic due to episode resets and the timestep feature. We included Proposition 4.1 primarily for theoretical completeness, highlighting that rollouts can approximate policy performance under idealized conditions. We will clearly state this limitation and clarify the scope of this proposition in the revision. > Proposition 4.2’s noise assumption: Indeed, as you correctly observe, the reward function generated by the LLM is deterministic, making the original assumption of zero-mean random noise less reflective of our practical setup. Your suggestion—dropping the noise term and explicitly treating the human-generated reward function as potentially faulty—is more consistent with our actual implementation. This perspective strengthens our conclusions, as our adaptive weighting approach does not fundamentally rely on randomness and remains effective even in the presence of deterministic errors. We will explicitly clarify this point in the revision. --- ### 3. Baselines and Intrinsic Reward Comparisons >MAPPO vs. IPPO performance: In our experiments, MAPPO employs a shared policy among agents with a centralized value function, while IPPO uses independent policies. We observed that IPPO often performs better in coordination-intensive scenarios like Overcooked, potentially due to reduced interference during training. 
Prior works [1,2] similarly report IPPO matching or outperforming MAPPO, even without centralized critics. We will clarify this in the revision.

> Comparison to intrinsic reward methods:

Our method is orthogonal to intrinsic reward approaches, as M3HF assigns human feedback to each agent’s reward function, effectively serving as a form of tailored intrinsic reward. We also agree that it is valuable to compare intrinsic reward baselines directly; thus, we choose IRAT [3], an intrinsic reward method that combines manually defined reward functions with the original task objective. Since Overcooked lacks pre-built intrinsic rewards, we implement three manually constructed reward variants (rw_1–3) based on increasing levels of agent coordination:

* rw_1: rewards any pickup or chop of ingredients
* rw_2: rewards reaching the knife after picking up ingredients
* rw_3: rewards chopping after reaching the knife and picking up

| Iters. | 400 | 600 | 800 | 1000 |
|-|-|-|-|-|
| IPPO | 19.2 ± 4.5 | 23.1 ± 2.7 | 23.2 ± 3.3 | 27.4 ± 4.9 |
| IRAT-rw_1 | 68.9 ± 10.1 | 52.5 ± 11.3 | 78.2 ± 14.5 | 94.9 ± 10.7 |
| IRAT-rw_2 | 1.1 ± 2.1 | 9.29 ± 11.4 | 16.0 ± 8.1 | 34.5 ± 14.0 |
| IRAT-rw_3 | 10.8 ± 9.1 | 17.3 ± 10.6 | 21.3 ± 8.7 | 33.8 ± 9.9 |
| M3HF | **164.8 ± 1.2** | - | - | - |

The table shows that IRAT variants perform better than vanilla IPPO, but they lag behind M3HF, especially in early training. This is because IRAT defines rewards before rollout, without observing actual policy behavior—often leading to coordination issues (e.g., agents crowding the same ingredient). In contrast, M3HF leverages post-rollout human feedback and reward assignment to precisely target behavioral failures, resulting in faster and more effective cooperation.

---

### 4. Domain knowledge in reward templates:

Thank you for highlighting this limitation. While our templates rely on domain knowledge, they are adaptable. 
In the football environment, we reuse categories like distance-based and action-based rewards, adjusting only to the environment’s observation and action spaces. We will clarify this in the revision. --- ### 5. Additional Clarifications and Typos >Equation 8 (undiscounted occupancy) Thank you—we agree. We will revise Eq. (8) to include appropriate discounting weights for theoretical consistency. > Clarification on "Long-term Exploration Failure" In implementation, this mechanism rolls back to a prior policy and requests new feedback if learning stagnates. However, this scenario did not arise in our experiments. We will clarify this in the revision. > Typos We appreciate your attention to detail and will correct all mentioned typos and figure caption issues in the revision. --- * [1] C.S. de Witt et al., "Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge?", arXiv, 2020 * [2] Yu C. et al., "The Surprising Effectiveness of PPO in Cooperative Multi-Agent Games", ICLR, 2022 * [3] Wang L. et al., "Individual Reward Assisted Multi-Agent Reinforcement Learning", ICML, 2022 --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal! I understand the difference between the MAPPO and IPPO implementation and I'm satisfied with the new results regarding IRAT. To follow up on the theoretical assumptions - I personally feel it would be satisfactory to state that *multiple* independent rollouts could approximate policy performance (perfectly approximating as the number of rollouts approach infinity) instead of a single long trajectory. Is there a reason why this would not apply to your setting? - I'm satisfied with dropping the noise term and modifying the proof to show that the human feedback may be faulty (i.e. maybe rename R_true to R_human or something similar). As a proof, I think the impact of the weights needs to be formalized more. 
Additionally, when reading the rebuttals to the other reviewers, it seems a bit unclear how you are resolving the assumption of Gaussian noise (sometimes leaving it to future work); I think that pointing to this rebuttal and perhaps providing an updated proof sketch would be valuable to all of us. After reading the other reviews or rebuttals, I don't have major additional concerns. If there was time, I would've liked to see a comparison against PbMARL referenced in the rebuttal with 6vBu (i.e. an online version where you receive feedback after Gen 0 and only use that feedback), which would demonstrate the utility of multi-phase feedback and code-gen over the pairwise comparisons of PbMARL. Also, the link to the new football experiments (also referenced to 6vBu) just gives a blank pdf, so that needs to be updated. --- Reply to Comment 1.1.1: Comment: # Follow-up Reply Thank you for your follow-up. We are pleased to clarify as follows: --- ### Prop 4.1 Multiple rollouts are valid in our setting. We used a single rollout per human query to reduce annotation cost and for theoretical simplicity, but M3HF supports multiple rollouts and has already been applied in Football. We will clarify this in the revision. --- ### Prop 4.2 Below, we provide a revised P4.2 and proof sketch under faulty human feedback. For clarity, we drop the agent index $i$. At generation $k$, we define the combined reward as: $$\widehat{R} _{k} = \sum _{m=0}^{k} w _{m}^{k}R _{m},$$ with $R _0 = R _{ori}$ and $R _{m>0} = R _{human}$. Weights $w _m^k$ are updated per Eqs. 14–16. 
> Proposition 4.2 (Revised): Under Assumption A.3 (Performance Estimation Accuracy) and A.2 (Learning Algorithm Convergence) with $\epsilon=0$ (exact convergence), for any $K\ge 1$ and arbitrary feedback reward sequence $(R_k) _{k=1,2,\dots,K}$, the following inequality holds: $$J _{\mathrm{ori}}(\pi _{K}) - J _{ori}(\pi _{0}) \ge \sum _{j=1}^{n(K)} \Delta r _{i _j} - \delta,$$ where $\delta$ is a bounded positive constant independent of $K$, and $i _j$ represents the index at which the feedback reward is helpful for the $j$-th time. > Proof Sketch Let $i_j$ denote the index at which $\Delta r_k > 0$ occurs for the $j$-th time, and let $n(K)$ represent the index of the last occurrence of $\Delta r_k > 0$. We aim to show that for any $k$ satisfying $i_{j-1} \leq k < i_j$, the following inequalities hold: $$J(\pi_{k}) - J(\pi_0) = \sum_{l=1}^{j-1} \Delta r_{i_l} > 0, \quad k = i_{j-1},$$ $$J(\pi_{k}) - J(\pi_0) \ge \sum_{l=1}^{j-1} \Delta r_{i_l} - \delta, \quad \forall i_{j-1} < k < i_j.$$ The proof proceeds as follows: 1. The first equality is immediate by definition of $i_j$ and the algorithm: $$J(\pi_{i_{j-1}}) - J(\pi_0) = \sum_{l=1}^{j-1}\left(J(\pi_{i_l}) - J(\pi_{i_{l-1}})\right) = \sum_{l=1}^{j-1}\Delta r_{i_l}.$$ 2. For indices $k$ satisfying $i_{j-1} < k < i_j$, since $\Delta r_k < 0$, the weight assigned to the new reward function $R_k$ is clipped to zero by Eq.16. Thus, weights of previous rewards remain unchanged by Eq.14-15. Consequently, the policy update satisfies $J(\pi_k)-J(\pi_{i_{j-1}}) \ge -\delta$, where $\delta$ is a positive bounded constant. 3. Therefore, for $i_{j-1} < k < i_j$, we have: $$J(\pi_k)-J(\pi_0) = \sum_{l=1}^{j-1}\Delta r_{i_l} + J(\pi_k)-J(\pi_{i_{j-1}}) \ge \sum_{l=1}^{j-1}\Delta r_{i_l} - \delta.$$ Since $n(K)$ is the last index with $\Delta r_k > 0$, we conclude: $$J(\pi_K)-J(\pi_0) \ge \sum_{j=1}^{n(K)} \Delta r_{i_j} - \delta,$$ as desired. 
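As a side illustration, the weight dynamics that step 2 of this proof relies on can be sketched in a few lines. The entry weight $q = 1/(3+j)$ for a helpful reward matches the normalization used later in this thread; since Eqs. 14–16 are not reproduced here, the rest of the update is a simplifying assumption, not the paper's exact rule.

```python
# Sketch of the adaptive reward weighting assumed in the proof: a new feedback
# reward that helped (delta_r > 0, the j-th such occurrence) enters with
# normalized weight q = 1/(3+j) while old weights are rescaled by (1 - q);
# an unhelpful reward (delta_r <= 0) is clipped to weight zero, leaving the
# previous weights untouched (step 2 above). Simplified form, not Eqs. 14-16.

def update_weights(weights, delta_r, j):
    """Return the weight vector after receiving one new feedback reward."""
    if delta_r <= 0:
        return weights + [0.0]        # clip the new reward's weight to zero
    q = 1.0 / (3 + j)                 # normalized weight of the new reward
    return [w * (1 - q) for w in weights] + [q]

def combined_reward(weights, reward_fns, state, action):
    """R_hat(s, a) = sum_m w_m * R_m(s, a)."""
    return sum(w * R(state, action) for w, R in zip(weights, reward_fns))
```

Starting from weight [1.0] on R_ori, a first helpful reward gives [0.75, 0.25]; a later unhelpful one only appends a zero weight, which is why the degradation δ in the proposition is bounded regardless of how bad that reward is.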
> Further bound on $\delta$ **Lemma** Let $r _{1}, r _{2}, r _{3}$ be three reward functions. Define $$R = (1-p)\,r _{1} + p\,r _{2}, \quad\text{and}\quad R' = (1-p'-q)\,r _{1} + p'\,r _{2} + q\,r _{3}.$$ Let $\pi$ be an optimal policy for $R$, and let $\pi'$ be an optimal policy for $R'$. Then, for any $\gamma \in (0,1)$, \begin{align*} V _{r _{1}}^{\pi} - V _{r _{1}}^{\pi'} &\le \frac{2}{1-\gamma}\,\|\,R - R'\| _{\infty} \\ &\leq \frac{2}{1-\gamma} [ |p' + q - p| \|\,r _{1}-r _{3}\| _\infty + |p - p'| \|\,r _{2}-r _{3}\| _\infty ]. \end{align*} Let $\pi _{k}$ be the optimal policy after receiving the new bad reward $R_k$; we aim to bound $V _{R _{ori}}^{\pi _{k}}-V _{R _{ori}}^{\pi _{i _j}}$, where $i _j$ is the last time a good reward was received. We can derive the worst-case bound on $\delta$ by substituting $r _1=R _{ori}$, $r _2=$ the weighted combination of good feedback rewards, and $r_3=$ the new bad reward in the above lemma: $$ V _{R _{ori}}^{\pi _{k}}-V _{R _{ori}}^{\pi _{i _j}} \leq \frac{2}{1-\gamma} [ |p' + q - p| \|\,R _{ori}-R _k\| _\infty + |p - p'| \|\,r _{2}-R _k\| _\infty ].$$ Notice that $q = (1/(2+j))/(1+1/(2+j))=1/(3+j)$, and $p-p'\leq p-\alpha^j p =(1-\alpha^j)p$; thus we have $$ \delta \leq \frac{2}{1-\gamma}[|1/(3+j)-(1-\alpha^j)p|\|\,R _{ori}-R_k\| _\infty + (1-\alpha^j)p \|\,r _{2}-R _k\| _\infty ].$$ **Intuitively, P4.2 indicates that the algorithm benefits from each high-quality reward (each yielding a positive increment $\Delta r_{i_j}>0$), while its performance can degrade at most once, corresponding to the last received faulty reward.** --- ### PbMARL PbMARL targets a fundamentally different offline setting: it aims to identify a Nash equilibrium from a large offline dataset of pairwise preferences (e.g., 960 comparisons even in simple Overcooked), typically generated by a simulated policy. Collecting such data from real humans would be extremely costly. In contrast, M3HF requires only 5 rounds of human feedback during training. 
However, preference-based feedback could also be integrated into M3HF: feedback like “A performed better than B” could guide B to imitate A’s behavior via our reward templates. --- ### Football The link has been fixed. **We appreciate your follow-up. We hope our responses have addressed your concerns; if so, we’d be grateful for your support in raising the score.**
Summary: This paper addresses the challenge of designing effective reward functions in multi-agent reinforcement learning for complex, cooperative tasks with sparse or misaligned rewards. This paper proposes M³HF, a framework that integrates multi-phase human feedback of mixed quality into MARL by extending the Markov Game to include iterative human guidance. This paper provides a way of leveraging large language models to parse feedback into agent-specific reward functions using predefined templates (e.g., distance-based or action-based rewards), together with adaptive weight-adjustment mechanisms that balance new feedback against prior rewards via decay and performance-based updates. Claims And Evidence: Proposition 4.2 assumes zero-mean noise in feedback, but real human errors can be biased (e.g., consistently incorrect advice), which is not addressed. The theoretical analysis does not account for systematic human errors or adversarial inputs. Methods And Evaluation Criteria: Yes Theoretical Claims: Propositions 4.1 and 4.2 seem correct, but I did not check them carefully. Experimental Designs Or Analyses: While Table 3 lists hyperparameters, there is no indication that baselines were retuned for fairness. Experiments use only three random seeds (e.g., in Figure 3 and Tables 1-2), which is insufficient for MARL, where high variance is common. Supplementary Material: N/A Relation To Broader Scientific Literature: Related work on multi-agent reinforcement learning, reinforcement learning from human feedback, and multi-phase human feedback is provided. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Theoretical analysis claims to demonstrate robustness to noisy feedback, and the experimental results in Overcooked environments seem to show M³HF outperforms baselines (IPPO, MAPPO) by up to 50% in complex tasks, achieving faster convergence and higher asymptotic performance. 
Other Comments Or Suggestions: N/A Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Reply to Reviewer 7Tji We thank the reviewer for agreeing that our theoretical analysis demonstrates "robustness to noisy feedback" and that the experimental results demonstrate "faster convergence and higher asymptotic performance." > Proposition 4.2 assumes zero-mean noise in feedback, but real human errors can be biased (e.g., consistently incorrect advice), which is not addressed. The theoretical analysis does not account for systematic human errors or adversarial inputs. Thank you for your insightful comment. You raise an important point about biased human errors. As Reviewer sgwG also observed, our original zero-mean noise assumption was made primarily for theoretical tractability. We recognize that real human errors (e.g., consistently incorrect advice) may indeed be systematic or biased. However, our adaptive weighting mechanism inherently mitigates such systematic errors by continually reducing the influence of consistently detrimental feedback, as supported by our empirical results (Figure 4). Explicitly modeling and analyzing biased or adversarial feedback is indeed a valuable future direction, and we plan to address it in subsequent research. > While Table 3 lists hyperparameters, there is no indication that baselines were retuned for fairness. We performed hyperparameter tuning for all baselines to ensure a fair comparison. Specifically, for IPPO and MAPPO, we tuned learning rates, batch sizes, and gradient clipping values. For example, we systematically searched over learning rates in {3e-4, 1e-4}, #sgd_iters in {5, 10}, sgd_batch_size in {1024, 5120}, and entropy coefficient in {0.01, 0.05}, ultimately selecting the configurations with the best validation performance. For the macro-action based baseline, we directly adopted the best-performing hyperparameters reported for the Mac-based method [1]. We will explicitly include these details in our revised manuscript. 
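For concreteness, the search grid stated in this reply can be enumerated explicitly; the key names below are illustrative labels, not necessarily the authors' exact configuration keys.

```python
# The hyperparameter grid described above, enumerated explicitly.
# Key names are illustrative; values are the ones stated in the reply.
from itertools import product

grid = {
    "lr": [3e-4, 1e-4],
    "num_sgd_iter": [5, 10],
    "sgd_batch_size": [1024, 5120],
    "entropy_coeff": [0.01, 0.05],
}

# Cartesian product of all stated values: 2 * 2 * 2 * 2 = 16 configurations
# evaluated per baseline, with the best-validation configuration selected.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
```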
> Experiments use only three random seeds (e.g., in Figure 3 and Tables 1-2), which is insufficient for MARL, where high variance is common. Thank you for raising this point. While three seeds are commonly used in MARL studies, and our current results already show clear and consistent improvements, we agree that additional seeds could further strengthen the statistical confidence. We are currently running more seeds, which will be included in our revisions. --- ### Reference [1] Xiao, Y et al., "Asynchronous actor-critic for multi-agent reinforcement learning." NeurIPS, 2022.
Summary: The paper introduces a novel framework named M3HF (Multi-phase Human Feedback for Multi-agent Reinforcement Learning), designed to address the challenges of sparse or complex reward signals in multi-agent reinforcement learning (MARL) by incorporating multi-phase human feedback, including feedback of varying quality. The M3HF framework extends Markov games to incorporate human inputs and leverages large language models (LLMs) to parse and integrate human feedback, enabling agents to learn more effectively. Claims And Evidence: The claims about overall performance and resilience to mixed-quality feedback appear well-supported by the experimental results. However, the claim regarding the effectiveness of VLMs as an alternative to human feedback might require further substantiation with more comprehensive performance comparisons. Gemini-1.5-Pro-002 was used to generate feedback based on video rollouts similar to those observed by humans in the experiments. While an example of VLM-generated feedback is mentioned, detailed comparative performance metrics between human and VLM feedback are not fully elaborated upon in the given excerpts. Therefore, while the concept is promising, the current evidence may be insufficient to conclusively support the claim that VLMs can serve as a scalable and effective alternative without additional data demonstrating comparable or superior performance metrics. Methods And Evaluation Criteria: 1. Complex Reward Structures: The M3HF approach aims to enhance cooperation among agents in challenging reward environments by introducing mechanisms such as performance-based weight adjustments and weight decay to mitigate the impact of low-quality feedback. This is particularly important in multi-agent systems where the complexity of interactions and potential conflicts between agents' objectives can make reward design difficult. 2. 
Experimental Validation: The effectiveness of M3HF is validated through experiments conducted in a complex multi-agent environment based on the game Overcooked. In this setting, agents must learn to cooperate to prepare the correct salad and deliver it to a designated location. This scenario serves as an excellent testbed because it requires coordinated actions to achieve goals, which is a common challenge in multi-agent systems. 3. Benchmark Comparisons: The study compares M3HF with several strong baseline methods, including MAPPO, IPPO, and macro-action-based baselines. These comparisons demonstrate M3HF's consistent superiority across different environments and recipe setups, especially its ability to quickly converge to optimal strategies in simpler recipe scenarios. This indicates that the proposed method significantly improves learning efficiency and final performance. 4. Exploration of VLMs as an Alternative to Human Feedback: While not fully detailed, exploring Visual-Language Models (VLMs) as scalable and effective alternatives to human feedback is a promising direction, especially considering ways to reduce dependency on human input. However, this claim would benefit from further substantiation with specific data comparing the performance of feedback generated by VLMs versus human feedback. Theoretical Claims: I have roughly checked the proofs, and they appear to be correct with the conclusions likely valid. However, I did not meticulously verify the details of the proofs. In my opinion, the theoretical aspect is not critically important because the assumption of Gaussian distribution seems too strong and significantly deviates from reality. Experimental Designs Or Analyses: 1. **Performance Across Different Environments**: The study evaluates M3HF in various Overcooked layouts (e.g., Layout C), demonstrating its effectiveness and adaptability. 
This is a robust approach as it tests the framework under different complexities, ensuring that M3HF can perform well in diverse scenarios. The comparison with backbone algorithms like IPPO shows consistent superiority, which supports the claim of M3HF's effectiveness. 2. **Impact of Mixed-Quality Human Feedback**: Experiments were conducted to assess how M3HF handles feedback of varying quality. An example given is where agents exhibited suboptimal behavior due to poor coordination, yet the system maintained performance close to baseline algorithms even when receiving inaccurate or unhelpful feedback. This experiment demonstrates the robustness of M3HF against noisy feedback through its weight adjustment mechanisms. The analysis appears sound, showing that the method effectively mitigates the impact of low-quality human input. 3. **VLM-Based Feedback Generation**: The potential of Vision-Language Models (VLMs) as an alternative to human feedback was explored. Gemini-1.5-Pro-002 was used to generate feedback based on video rollouts. While this concept is promising, the provided information does not fully elaborate on how VLM-generated feedback compares to human feedback in terms of performance. More detailed comparative metrics are needed to substantiate claims about the scalability and effectiveness of VLMs as a substitute for human feedback. 4. **Robustness Analysis to Noisy Feedback**: Theoretical analysis and stochastic approximation theory were employed to analyze the robustness of M3HF under noisy human feedback. Proposition 4.2 suggests that zero-mean noise does not degrade the expected performance over time, supported by empirical evidence showing minimal performance drops under mixed-quality feedback conditions. This theoretical foundation adds credibility to the experimental findings but could benefit from more comprehensive empirical validation across different noise levels and types. 
Supplementary Material: I reviewed the assumptions of the propositions, the proofs of the propositions, and the pseudocode of the algorithm presented in the document. Relation To Broader Scientific Literature: 1. Multi-phase Human Feedback: Earlier works such as Yuan et al. (2022) and Sumers et al. (2022) have considered the role of iterative and multi-phase human feedback in reinforcement learning, but these approaches often rely on predefined communication protocols or need human demonstrations that can be restrictive. In contrast, M3HF allows for more flexible and dynamic adjustments to the importance of feedback, accommodating varying qualities and phases of human input. 2. Reinforcement learning from human feedback. While RLHF has been successfully applied to train Large Language Models (Ouyang et al., 2022; Shani et al., 2024), these approaches primarily focus on aligning LLM outputs with human preferences through single-turn interactions and scalar reward signals. M3HF differs by incorporating multi-phase, mixed-quality human feedback directly into the reinforcement learning loop of agents in a multi-agent environment. 3. Language Models in Reward Design and Policy Learning. Previous works are limited to the single-agent setting, while this one focuses on the multi-agent setting. Essential References Not Discussed: To the best of my knowledge, this paper provides a sufficiently thorough discussion of all closely related works. Other Strengths And Weaknesses: ### Strengths 1. **Originality and Innovation**: - The paper presents an innovative approach by introducing the M3HF framework, which is creatively designed to address the challenges of sparse or complex reward signals in multi-agent reinforcement learning (MARL) by incorporating multi-phase human feedback, including feedback of varying quality. 2. 
**Significance**: - By addressing challenges related to sparse or complex reward functions and offering solutions that improve learning efficiency and performance, the proposed method could lead to more robust and adaptable AI systems. - The exploration of Vision-Language Models (VLMs) as an alternative to human feedback introduces a scalable solution for reducing dependency on human input, which is particularly important for large-scale deployment. 3. **Clarity**: - The clarity of presentation is commendable, with detailed descriptions of experimental setups, comparisons with baseline methods, and theoretical underpinnings. The use of visual aids like figures and tables helps in understanding the performance metrics and outcomes. 4. **Real-World Application Potential**: - The application-driven nature of this ML paper is evident through its focus on practical issues such as handling mixed-quality feedback and the potential for using VLMs to reduce reliance on human input. These aspects are crucial for deploying AI systems in real-world settings where human resources may be limited. ### Weaknesses 1. **Assumptions and Generalizability**: - While the assumption of Gaussian distribution facilitates theoretical analysis, it might limit the generalizability of the findings to real-world scenarios where data distributions can be significantly more complex and varied. 2. **Depth of Analysis on VLM Feedback**: - Although the idea of using VLMs for generating feedback is promising, the depth of analysis and empirical evidence supporting this aspect is currently limited. More comprehensive comparative studies between human and VLM-generated feedback would strengthen the claims made about the scalability and effectiveness of VLMs. 
Other Comments Or Suggestions: Page 11, row 594: the equation exceeds the line-length restriction. Questions For Authors: The paper mentions that visual-language models (VLMs) currently lack specificity in critical feedback areas, offering vague suggestions like "improve coordination". Could you elaborate on how this limitation impacts the overall performance of M3HF and what steps might be taken to mitigate these issues in future iterations? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Reply to Reviewer PGeQ We thank the reviewer for acknowledging our M3HF framework as an "innovative approach" with "real-world application potential" and for recognizing its "originality and innovation" in addressing "challenges of sparse or complex reward signals in multi-agent reinforcement learning." --- > Question 1 and Weakness 2: "The paper mentions that visual-language models (VLMs) currently lack specificity in critical feedback areas, offering vague suggestions like "improve coordination". Could you elaborate on how this limitation impacts the overall performance of M3HF and what steps might be taken to mitigate these issues in future iterations?" Our experiments (Figure 4b-c) indicate that while VLMs can produce human-like feedback, their current lack of specificity impacts their utility in complex coordination scenarios, leading to performance gaps compared to precise human feedback. To mitigate this, future work includes: (1) fine-tuning VLMs on specialized multi-agent tasks, (2) enhancing their reasoning capabilities to identify and provide actionable coordination improvements, and (3) exploring hybrid systems that combine VLM-generated feedback with minimal human refinement, aiming to balance scalability and effectiveness. --- > Weakness 1: Assumptions and Generalizability We acknowledge that assuming Gaussian-distributed noise facilitates theoretical analysis but may not fully capture real-world complexities. However, our empirical results have shown robust performance across varied conditions, and a future research direction could be to relax this assumption under more realistic feedback distributions.
Summary: This paper introduces M3HF, a framework for integrating multi-phase human feedback of varying quality into multi-agent reinforcement learning (MARL). The authors propose a Multi-phase Human Feedback Markov Game (MHF-MG) that extends standard Markov Games to incorporate iterative human guidance. The framework uses large language models (LLMs) to parse human feedback, converts it into structured reward functions through predefined templates, and employs adaptive weight adjustment mechanisms to handle feedback of mixed quality. The authors provide theoretical analysis justifying their approach and demonstrate empirical results in the Overcooked environment, showing that M3HF outperforms several baseline methods in complex coordination tasks. Claims And Evidence: The paper claims that M3HF significantly outperforms state-of-the-art MARL methods by leveraging human feedback across multiple training phases. The evidence provided includes performance comparisons in the Overcooked environment with varying layouts and recipe complexities. The results do show consistent improvements over the baselines (IPPO, MAPPO, and macro-action-based methods). However, the evidence is somewhat limited in scope. The experiments are confined to a single environment (Overcooked) with variations, and the performance gains could be attributed to the additional information provided by human feedback rather than the specific mechanisms of the M3HF framework. The ablation studies help address some of these concerns by showing the value of LLM parsing and weight adjustment, but more diverse environments would strengthen the claims. Methods And Evaluation Criteria: The proposed methods are generally sound for the problem at hand. Using LLMs to parse human feedback and convert it into structured reward functions is a reasonable approach, and the weight adjustment mechanism to handle mixed-quality feedback is well-motivated. 
The evaluation criteria focus on the average episode return in the Overcooked environment, which is appropriate for measuring task performance. The authors compare against relevant baselines and include ablation studies to isolate the effects of different framework components. However, the paper lacks clear metrics for evaluating the quality of human feedback and how effectively it is incorporated into the learning process. Additionally, there is limited discussion of the computational overhead introduced by the LLM parsing and reward function generation. Theoretical Claims: The paper presents two theoretical propositions: one justifying the use of rollout-based performance estimates and another analyzing the framework's robustness to noisy human feedback. The proofs appear sound, though they rely on several assumptions that may not always hold in practice, such as the ergodicity of the Markov chain induced by the policy and the zero-mean nature of the noise in human feedback. Experimental Designs Or Analyses: The experimental design is generally sound, with appropriate baselines and ablation studies. Supplementary Material: The supplementary material provides additional details on the environment settings, implementation details, and proofs of the theoretical propositions. The prompts used for the LLM parsing are particularly helpful for understanding the practical implementation of the framework. Relation To Broader Scientific Literature: The paper positions itself within the literature on MARL, reward design, and learning from human feedback. While the authors provide a reasonable overview of related work, the novelty of their contribution is somewhat limited. Personally speaking, using LLMs for automated reward design based on human feedback has become quite common in recent literature. 
Essential References Not Discussed: A significant omission is the lack of comparison with other methods that incorporate human feedback into MARL, such as "Multi-Agent Reinforcement Learning from Human Feedback." This work addresses a similar problem space, and a direct comparison would provide valuable context for understanding the unique contributions of M3HF. Given that human feedback provides additional information not available to the baseline methods, it would be important to compare against methods that also leverage this type of input to isolate the specific benefits of the M3HF framework. Other Strengths And Weaknesses: * The paper presents a well-structured framework (M3HF) that integrates human feedback into multi-agent reinforcement learning in a systematic way, addressing the challenge of reward design in complex MARL environments. * The authors provide a theoretical analysis of their approach, particularly regarding the robustness to noisy human feedback, which adds credibility to their method and helps explain why their approach works. * The experimental results demonstrate consistent performance improvements across different environments with increasing complexity, showing the scalability and effectiveness of the proposed method in challenging coordination tasks. Other Comments Or Suggestions: The paper has several limitations that significantly impact its contribution: 1. The innovation is somewhat limited. Using LLMs for automated reward design has become a common concept in recent literature. For instance, "Motif: Intrinsic Motivation from Artificial Intelligence Feedback" explores similar ideas. Unfortunately, the paper doesn't leverage this automation at scale, which would have been a more significant contribution. The core idea of using LLMs to parse human feedback and generate reward functions is incremental rather than transformative. 2. The practical implementation details raise concerns about the method's applicability. 
It is not clear from the paper how many training iterations are required or how many human feedback instances are needed throughout the process. This raises questions about whether the feedback data is used in an on-policy manner or can be reused off-policy. The requirement for humans to repeatedly provide feedback during training seems cumbersome and potentially impractical for real-world applications, especially if frequent interventions are needed. 3. The baseline comparison is inadequate. The paper fails to compare against other human-feedback-based MARL methods, such as "Multi-Agent Reinforcement Learning from Human Feedback." Without such comparisons, it's difficult to assess whether the performance improvements come from the specific approach proposed or simply from the additional information provided by human feedback. Since human feedback inherently introduces external knowledge, performance improvements without proper baselines could be considered trivial. 4. The weight adjustment mechanism, while theoretically justified, seems simplistic and potentially brittle in practice. The paper doesn't thoroughly explore how sensitive the performance is to different weight decay parameters or how the system behaves when feedback quality varies dramatically across generations. A more robust analysis of these aspects would strengthen the paper's contribution. Questions For Authors: * How many training iterations and human feedback rounds are typically required to achieve the reported results? This information would help assess the practical applicability of your approach. * Is the human feedback data used in an on-policy manner, or can it be reused across training iterations (off-policy)? This distinction has significant implications for the amount of human involvement required. * Did you compare M3HF with other methods that also incorporate human feedback, such as "Multi-Agent Reinforcement Learning from Human Feedback"? 
Such comparisons would help isolate the specific contributions of your framework from the general benefits of human knowledge. * Have you considered the scalability of your approach to larger multi-agent systems or more diverse environments? The current evaluation is limited to the Overcooked environment with three agents. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Reply to Reviewer 6vBu We sincerely thank the reviewer for your positive remarks on our framework’s structure, scalability, and consistent performance. We hope the following responses address your concerns. --- ### 1. Regarding the Novelty of Our Work While existing work on LLM-based reward design mainly focuses on **single-phase** and **single-agent** settings, our core novelty lies in jointly addressing the challenges of **multi-phase** human feedback and **multi-agent** reinforcement learning. The challenge in **multi-phase** feedback lies in integrating guidance of varying focus and intent across stages. For example, initial feedback may reflect general observations about teamwork, while later rounds provide more concrete corrections based on improved understanding of agent behavior. The **multi-agent** challenge involves assigning human feedback—whether agent-specific or team-level—to appropriate reward functions. For instance, when a user suggests “place ingredients in the center desk” in Overcooked, our method can propagate this suggestion to all relevant agents by shaping their respective reward functions accordingly. Our novelty was also noted by other reviewers: “human feedback within a MARL training loop” (Reviewer sgwG) and “an innovative approach to sparse rewards” (Reviewer PGeQ). --- ### 2. Regarding the Comparison with Other Methods The methods mentioned in your review—PbMARL and MOTIF—differ significantly from M3HF in their settings. * **PbMARL** (renamed from “MARLHF”) follows a different offline setting, using only pairwise preference comparisons from a pre-collected dataset generated by a simulated policy. In contrast, M3HF collects natural language feedback dynamically during training and supports diverse reward templates, enabling more flexible and scalable reward shaping. * **MOTIF** does not use any human feedback. It is designed for single-phase, single-agent settings and relies on LLM-generated intrinsic rewards. 
As such, it does not address challenges in multi-phase human-in-the-loop learning or multi-agent coordination. To the best of our knowledge, no prior work addresses human feedback in a multi-phase, multi-agent setting, leaving no directly comparable baselines. We therefore compare against strong backbones (MAPPO/IPPO) and SOTA coordination-focused methods (Mac-based). Additionally, Tab.1 includes controlled variants with modified feedback settings to further validate our framework’s effectiveness. --- ### 3. Regarding Practical Applicability Regarding **practical efficiency**, M3HF achieves strong sample efficiency with minimal human input. For example, in Overcooked-B: L-T-S, it reaches high performance within ~15k eps (~600 iters, ~2–3 h of training) using just 2 rounds of feedback—whereas MAPPO/IPPO typically require 80k–100k eps (~4k iters, ~12–18 h), and macro-action methods need 50k–75k eps (~3k–4k iters, ~9–15 h). This represents a 3×–6× speedup. Empirically, each feedback query—including watching rollouts and composing responses—took about 3–5 mins, keeping the total human effort under 25 mins while saving several hours of training time. Regarding **feedback quality**, explicitly quantifying it is non-trivial—if a ground-truth metric existed, it could replace human input entirely. Instead, we use adaptive weighting to handle feedback variability, and we show through experiments (in Fig.4c and App. D) that our method remains robust even when some feedback is noisy or incorrect. --- ### 4. Regarding the Scalability of Our Method We extended our evaluation to Google Football 5v5, a complex multi-agent benchmark. M3HF continues to outperform standard MARL baselines with the multi-phased human feedback. Full environment details are provided in the figure caption. [Anonymous Link: For the Football Env Results](https://drive.google.com/file/d/1wdKthshHkqb7h5u9ko9q7X-8Qw1ubhw-/view?usp=sharing) --- ### 5. 
Regarding the Robust Analysis of Weight Adjustment We provide an empirical robustness analysis of the weight adjustment strategy in Sec.4.4, Sec.5 (Question 2), and App.D in our paper. Specifically, Fig.4c shows that M3HF maintains strong performance even under deliberately misleading feedback. Tab.1 compares M3HF with variants lacking weight adjustment or using fixed weights, confirming the advantage of our adaptive strategy. Tab.2 evaluates robustness under correct, partially incorrect, and fully misleading feedback, demonstrating resilience to varying feedback quality. --- ### 6. Q&A > **Q1: How many training iterations and feedback rounds are needed?** Empirically, M3HF saves 10–15 hours of training with at most 5 rounds of human feedback (1k iters/25k eps). > **Q2: Is human feedback used on-policy or off-policy?** Feedback is collected on-policy, directly influencing the next learning phase. > **Q3:** Comparisons with other methods Please check Sec.2 of this rebuttal. > **Q4:** Scalability of M3HF Please check Sec.4 of this rebuttal.
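For concreteness, one way a phase-based weight-decay aggregation of parsed reward functions could look is sketched below; this is a hypothetical illustration (the function name, geometric decay rule, and renormalisation are our assumptions, not the paper's exact mechanism):

```python
def combine_rewards(reward_values, phase_ages, decay=0.9):
    """Hypothetical aggregation of per-phase parsed rewards: each phase's reward
    is down-weighted geometrically by its age (number of phases since the
    feedback was given), then the weights are renormalised to sum to one."""
    weights = [decay ** age for age in phase_ages]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Combined shaped reward is the weighted sum of the per-phase rewards.
    return sum(w * r for w, r in zip(weights, reward_values))

# Recent feedback (age 0) outweighs feedback from two phases ago (age 2).
print(combine_rewards([1.0, 0.0], phase_ages=[0, 2]))
```

A scheme of this shape makes the sensitivity question raised in the review concrete: the `decay` parameter directly controls how quickly stale or low-quality feedback loses influence.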
The Polynomial Stein Discrepancy for Assessing Moment Convergence
Accept (poster)
Summary: In Bayesian statistics, it is quite common to want to integrate some functions with respect to the posterior distribution (e.g. the mean). When the posterior is complicated, this has no closed form solution, and so practitioners often resort to using MCMC samplers or diffusion samplers. In the past decade, the Stein discrepancy was developed as a way to quantify the quality of a sample relative to a target distribution, even if the normalization constant of the target distribution is unknown. The typical choice of the Stein set is an RKHS associated with the Gaussian or inverse multiquadric kernel. The authors propose using a different Stein set here, i.e., a Stein set where the preimage is the vector space of all monomials with degree less than or equal to $r$. The authors argue the benefits are twofold: this can be more computationally efficient when $r$ and $d$ are relatively small compared to $n$, and, in the case that the target is Gaussian, it focuses the discrepancy more on the similarity of the moments of the two distributions. They offer some theory showing that this holds in the case of a Gaussian, and they also illustrate the efficacy of this discrepancy on many different examples commonly used in this literature. ## update after rebuttal Based on the authors' feedback, I'm inclined to keep my score of a weak accept. I think there are good ideas in the paper, but there are still some lingering questions (choice of polynomial basis, lack of shift invariance, etc.) which degrade the practicality of the method. Claims And Evidence: Yes, the claims in this paper are convincing. The main theoretical claim (Proposition 3.2) is proved in the appendix, and all other claims about its efficacy are demonstrated empirically. There is ample empirical work, which both displays the benefits of this approach and also some areas where the Gaussian assumption of $P$ can go wrong. 
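The monomial Stein set described in the summary rests on the Langevin Stein identity $\mathbb{E}_P[(\mathcal{A}g)(X)] = 0$. As a toy check of our own (a standard normal target is assumed purely for illustration, where the score is $s(x) = -x$), the identity can be verified by Monte Carlo for monomial test functions:

```python
import random

def stein_op_monomial(k, x):
    """Langevin Stein operator applied to g(x) = x^k for a standard normal
    target with score s(x) = -x: (A g)(x) = g'(x) + g(x) s(x) = k x^(k-1) - x^(k+1)."""
    return k * x ** (k - 1) - x ** (k + 1)

rng = random.Random(1)
samples = [rng.gauss(0.0, 1.0) for _ in range(200_000)]
for k in (1, 2, 3):
    est = sum(stein_op_monomial(k, x) for x in samples) / len(samples)
    print(k, est)  # each estimate is close to zero, as Stein's identity predicts
```

For samples from a distribution other than the target, these averages need not vanish, which is exactly what a Stein discrepancy built from such test functions exploits.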
Methods And Evaluation Criteria: Yes, the proposed methods make intuitive sense for the problem at hand. The authors even show that the PSD can perform well for distributions that are bimodal (Figure 2), but that it can also fail for some non-Gaussian and heavy-tailed targets (Appendix C.6). They also demonstrate both theoretically and empirically that for low $r$, this method is computationally more efficient than alternatives. Theoretical Claims: The authors show that PSD = 0 enforces moment matching up to order $r$ when $P$ is Gaussian. This illustrates that the PSD does at least what practitioners want in the simplest case. This was fully argued in Appendix B. Experimental Designs Or Analyses: The experimental analyses here seem valid. Most of them are borrowed from previous papers in the field and thus have been used as benchmarks before. Supplementary Material: Yes, I read through all the appendices. Relation To Broader Scientific Literature: The paper is motivated by tailoring the Stein discrepancy to be even more useful than it currently is for Bayesian analysis. Since practitioners often want their MCMC samplers to approximate a Bayesian posterior in the mean and covariance, this discrepancy is more tailored to helping identify when that has gone awry. Essential References Not Discussed: The paper did a good job citing the relevant literature. One thing I might suggest is that in the "Measuring sample quality with Stein’s method" paper, the graph Stein discrepancy actually does control convergence with respect to the Wasserstein metric, which implies convergence in mean. Other Strengths And Weaknesses: The main strength of this paper is its significance in articulating an efficient discrepancy that can help identify differences in moments when the target distribution is approximately Gaussian. The idea of using a Stein set of monomials is not so novel, but the thorough empirical work and ability to work in a broad range of examples is quite remarkable. 
The simplicity of the method and clarity of the paper are also positives. There are some limitations to the paper. In the general theory of Stein discrepancies, one often defines the suitable test set $\mathcal{H}$ which controls convergence. E.g., in another version of the Stein discrepancy, this would be the set currently used as the span of monomials $\mathcal{G}$. Then one usually solves the Stein equation $\mathcal{A} g = h$ so they can guarantee there is always a $g\in\mathcal{G}$ that satisfies the Stein equation for each $h$ for some set $\mathcal{G}$. The authors here do implicitly show a version of this for multivariate Gaussian $P$, which is a notable feat. However, as discussed in the questions below, this does not naturally generalize to non-Gaussian distributions. The authors do present some solid evidence that the PSD can still perform well even for non-Gaussian $P$, but there are no robustness results to illustrate when the PSD might behave quite surprisingly. And while the point about the Bernstein-von Mises limit is well taken, it still doesn't provide much confidence in how powerful this statistic is when using the PSD in the wild. Other Comments Or Suggestions: + A few places in the paper mention "Theorem 3.1" but in the paper this is written as "Corollary 3.1." + It would be nice if Figure 4 was a log/log plot so it was easier to see the different growth rates. Questions For Authors: [Q1] Under what conditions should a practitioner elect to use the PSD over the KSD (with say IMQ kernel)? As surfaced in the paper, the IMQ-KSD at least controls weak convergence (but not necessarily moments), whereas the PSD controls moments in the case when $P$ is Gaussian. If practitioners know that the posterior distribution is Gaussian, they can surely use the PSD, but how often do practitioners know this in advance? If practitioners know that the target is Gaussian, aren't there simpler ways to learn the moments of the target distribution? 
If the moment matching were guaranteed for a larger class of target distributions (e.g. log concave), then perhaps the validity of this would be more concrete. [Q2] The PSD given here is not translation invariant, i.e., if one shifts both P and Q by some vector $\alpha$, the PSD will have a different value. It appears that the implicit choice of the origin imposes different weights on each basis monomials of $\mathcal{G}$. How sensitive is the power of the PSD to this implicit parameterization? Should one try to center the distributions $P$ and $Q$ before using the PSD? [Q3] The authors show that if $P$ is a multivariate Gaussian and $PSD(P, Q)=0$, then the first $r$ moments of $P$ and $Q$ match. Can the authors say anything about the topology of the PSD, even in the case that $P$ is Gaussian? I.e., if $PSD(P, Q_n)\to 0$, does this imply that the first r moments of $Q_n$ must converge to $P$? [Q4] The authors mention that the PSD presented here is "offering a simpler formulation that may be more effective for identifying specific moments where discrepancies occur." Can the authors explain this in more detail? In the non-Gaussian case this seems non-trivial, and even in the Gaussian case, how would this look exactly? The differentiation operator and a non-identity covariance would result in some weighted sum of monomials, which are not exactly known without knowing the covariance. Are there any benefits to using a different set of spanning polynomials, e.g. the Chebyshev polynomials? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your time reviewing our manuscript and for your valuable comments and suggestions. We will create a log/log plot, replace "Theorem 3.1" with "Corollary 3.1." and mention that property of the graph Stein discrepancy. **(Lack of) confidence about power of PSD in the wild** We have highlighted in the paper the strong theoretical performance on Gaussian targets and the strong empirical performance of PSD on non-Gaussian targets. During this review period, we have also highlighted issues with PSD for distributions where moments do not exist (e.g. Cauchy). However, the development of theoretical results for the non-Gaussian setting represents a substantial undertaking that would be better considered in a future paper. **When to use PSD over KSD** Broadly we suggest using PSD when one wishes to obtain high statistical power for detecting discrepancies in moments and the target is "close to Gaussian" (with checks that we have expanded on elsewhere). Fast but biased sampling methods like SG-MCMC, where the discrepancy is often in the variance and the target is often close to Gaussian, are good applications that we highlight in the paper. The linear-time complexity of PSD is particularly helpful for large $n$. **How do practitioners know the posterior is Gaussian?** We propose a diagnostic plot in the discussion. If the gradients are available in analytic form, rather than indirectly through automatic differentiation procedures, then one can also determine which expectations are being tracked by analysing the system of linear equations in equation (14) of Appendix B. We do this for the Rosenbrock target in Appendix C.6, and we are able to determine exactly which expectations are being tracked. We will suggest this technique in the discussion to diagnose situations where the PSD might not behave as expected. **Simpler ways to learn the moments if the target is Gaussian** Estimating the first $r$ moments need not be our only goal. 
However, checking for discrepancies in moments can be important in practice. For example, in the common application of SG-MCMC algorithms, poor step-sizes can lead to under- or over-estimated variance. Selecting the step-size based on matching the second-order moment can lead to a reasonable posterior approximation, as explained in the paper. **Moment-matching guarantees for a larger class of target distributions** We can guarantee PSD is assessing moments for Gaussian targets because $\nabla \log p(x)$ is a linear function in that context, so the Stein operator applied to an $r$th order polynomial is itself an $r$th order polynomial. In this sense, the closer $\nabla \log p(x)$ is to a linear function, the closer the PSD is to assessing discrepancies in the first moments. **Sensitivity of PSD to mean-shift** We are investigating this further by observing the sensitivity of PSD to transformations. The moment-tracking property of PSD will continue to hold due to similar reasoning as Appendix D. **Topology of PSD and properties when $PSD(P, Q_n)\to 0$** While higher order PSD becomes increasingly complex, we can explicitly consider the case of $P=N(\mu_P, \Sigma_P)$ and $r = 1$. Here, $PSD(P,Q_n) = ||n^{-1}\sum_{i=1}^n\Sigma_P^{-1}(\mu_P-X_i) ||_2 = ||\Sigma_P^{-1}(\mu_P-\bar{X}) ||_2$ is measuring the Mahalanobis distance between $\mu_P$ and the sample mean $\bar{X}$. From the law of large numbers as $n\rightarrow \infty$, $\bar{X}\rightarrow \mu_Q$ and consequently, $PSD(P,Q)\rightarrow ||\Sigma_P^{-1}(\mu_P-\mu_Q)||_2$, which is zero if and only if the first moments of $P$ and $Q$ match. Assuming iid draws from $Q$, the CLT gives $\sqrt{n}\,\Sigma_P^{-1}(\bar{X}-\mu_Q)\sim N(0, \Sigma_P^{-1}\Sigma_Q\Sigma_P^{-T})$, so $PSD = ||Y +\Sigma_P^{-1}(\mu_Q-\mu_P)||_2$ where $Y\sim N(0, n^{-1}\Sigma_P^{-1}\Sigma_Q\Sigma_P^{-T})$. The PSD behaves as the square root of a noncentral $\chi^2$ distribution. 
This result for $r=1$ serves as an illustrative example for iid samples and Gaussian targets. More practically useful theory, including for correlated samples, would be substantially more challenging and we will mention it as future work in the paper. **Simpler formulation for identifying specific moments where discrepancies occur** If the target is Gaussian with diagonal covariance, then for the form of PSD we consider, the individual terms in the PSD correspond exactly to the moments of interest, as explained in the paper. This makes identification of problematic moments simpler than for alternative polynomial kernels. **Benefits of other spanning polynomials?** Unfortunately the benefits of orthogonal polynomials such as Chebyshev polynomials would likely be lost with PSD. It is not clear that one obtains a Chebyshev polynomial after applying a Langevin Stein operator to a Chebyshev polynomial.
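The $r = 1$ Gaussian derivation in the rebuttal above can be checked numerically. A minimal one-dimensional sketch (the sampling setup and tolerances are our own choices; the formula follows the rebuttal's $PSD = \|\Sigma_P^{-1}(\mu_P - \bar{X})\|_2$ specialised to $d = 1$):

```python
import random

def psd_r1_gauss1d(mu_p, var_p, xs):
    """First-order PSD against a 1-D Gaussian target N(mu_p, var_p):
    PSD = |(1/n) sum_i (mu_p - x_i) / var_p|, i.e. a scaled distance
    between the target mean and the sample mean."""
    return abs(sum((mu_p - x) / var_p for x in xs) / len(xs))

rng = random.Random(0)
matched = [rng.gauss(0.0, 1.0) for _ in range(100_000)]  # Q = P
shifted = [rng.gauss(0.5, 1.0) for _ in range(100_000)]  # first moment shifted
print(psd_r1_gauss1d(0.0, 1.0, matched))  # near 0
print(psd_r1_gauss1d(0.0, 1.0, shifted))  # near 0.5, the mean discrepancy
```

This reproduces, in the simplest case, the claimed behaviour: the statistic vanishes as the first moments match and otherwise converges to the (scaled) mean discrepancy.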
Summary: This paper proposes a class of monomials as test functions in kernelised Stein discrepancies, in order to speed up computations. It shows that the method works well when the target is Gaussian. Claims And Evidence: The claims are * the method detects differences in the first r moments in the Bernstein-von-Mises limit; * the corresponding KSD test has higher power than some of its competitors, in some empirical examples. It is the first claim that I find misleading. The paper shows that when the target is Gaussian, the method detects differences in the first r moments. It somewhat artificially puts the result in a Bayesian framework: instead of clearly stating that the method works well on a Gaussian target, the Gaussian is called the 'Bernstein-von-Mises limit'. Methods And Evaluation Criteria: The empirical evaluations look ok, although the code is not available (for blinded review, it states, but it is possible to put the code on an anonymous GitHub site). Theoretical Claims: The proofs are correct from what I see. However, the method is based on comparing high moments. If the underlying distribution is, say, Cauchy, and not Gaussian, then it is not clear that the method would work at all. While the limiting behaviour of the posterior mean (or mode or median) often follows a Gaussian distribution, this is what I believe the authors call the Bernstein-von-Mises limit. This convergence only holds under some conditions; see for example papers by Spokoiny et al. and Kasprzak et al. which show that the rate of convergence depends on the moments. Hence when the moments in the test are high, the posterior mean may still be far from normal and hence the asymptotic regime is not warranted. Experimental Designs Or Analyses: The experimental setup in the main paper is standard and useful to compare against other distributions. In the supplementary material a Rosenbrock target is also used. 
I found it difficult to get more information about this distribution; it seems to have an unknown normalising constant and a narrow ridge around the mode. This target distribution is clearly not Gaussian and the simulations show that the proposed method does not work very well in this case. Supplementary Material: I looked at all of it. Relation To Broader Scientific Literature: In the area of Stein's method there are results available for multivariate normal approximation using as test functions those of polynomial growth; see Gaunt, Robert E. "Stein’s method for functions of multivariate normal random variables." (2020): 1484-1513. It would be good to relate the approach in the paper to theoretical underpinnings; in particular, bounds on convergence rates could be of interest. It would also be good to relate the results to quantitative Bernstein-von-Mises theorems such as the ones mentioned above. The resampling method is very related to Xu, Wenkai, and Gesine D. Reinert. "A kernelised Stein statistic for assessing implicit generative models." Advances in Neural Information Processing Systems 35 (2022): 7277-7289. That paper includes normal approximation as a special case and gives explicit theoretical guarantees for re-sampling. Essential References Not Discussed: None beyond the above-mentioned papers. Other Strengths And Weaknesses: The idea of using polynomials as test functions is new in the context of KSD. However, in my view the paper is hiding the key results under the notion of the Bernstein-von-Mises limit, when really it means that the target is Gaussian. This could be seen as misleading. Other Comments Or Suggestions: Whether or not the $r$th moments exist is not discussed in the paper. Questions For Authors: * What happens if the target does not have any moments? * How much do the results depend on the Langevin Stein operator? What happens if a different Stein operator is used? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your time reviewing our manuscript and for your valuable comments and suggestions. **On the BvM limit** Proposition 3.2 explicitly states the Gaussianity assumption without mentioning the BvM limit, but we agree that this can be clarified in other parts of the text. We will reword the text in the abstract and the introduction of the paper. We will replace the text "we prove that it detects differences in the first $r$ moments in the Bernstein-von Mises limit" in the abstract with "we prove that it detects differences in the first $r$ moments for Gaussian targets." We opt to keep the text about the BvM as a remark after our theoretical result. This is relevant to the practical application of the method because a major application of KSD and related methods is in fast but biased Monte Carlo methods, such as SG-MCMC, where posterior distributions are often close to Gaussian. Bardenet et al (2017) highlight this in their review paper on MCMC for tall data, stating ``we emphasize that we have only been able so far to propose subsampling-based methods which display good performance in scenarios where the Bernstein-von Mises approximation of the target posterior distribution is excellent. It remains an open challenge to develop such methods in scenarios where the Bernstein-von Mises approximation is poor." **Method is based on comparing high moments** As mentioned in Section 3.3, lower-order moments are often of interest and they can be where discrepancies are most likely to appear. We give the example of SG-MCMC, where large step-sizes lead to over-estimation of the posterior variance and small step-sizes can lead to under-estimation of the posterior variance. We therefore do not necessarily agree that the method is based on comparing high moments. 
**Rosenbrock target context** We will explain that it is challenging because there is high probability mass in a narrow, curved region with complex dependencies and different scales across dimensions. We will add references to the original work (Rosenbrock, 1960), the probabilistic version (Goodman and Weare, 2010) and a paper talking about its complexity (Pagani, 2022). **Gaunt (2020) and relations to theoretical underpinnings** Thank you for the reference. Since the polynomial functions are not Lipschitz, we cannot directly compute the solution of Stein's equation and establish bounds on the derivative. The suggested paper seems to describe a technique to establish the required bounds for unbounded test functions which seems suitable for the PSD case. However, it is not clear how to establish this for our formulation of PSD. Further, the main goal of the paper seems to focus on establishing the closeness of the distribution of $g(W)$ to the distribution of $g(Z)$, where $W$ is a sequence of random variables approximating a standard Gaussian. We are unsure of the relevance of this continuous mapping theorem to our problem - what would be a possible interpretation of the function $g$? Initial results on the convergence of PSD for a simple setting are given in response to Reviewer JxPo. **Relationship to Xu et al (2022)** The resampling method is indeed similar to the method we have considered. Following Liu et al (2016), we are able to establish the asymptotic exactness of our bootstrap procedure. Theoretical guarantees for resampling under a normal approximation could be a valuable reference to extend the method. Upon our preliminary review of that paper, we feel that the results in Appendix $B.1$ are of interest and we will cite this as a potential extension for the bootstrap procedure. Please let us know if you were referencing a different theorem or section of the paper. **What if moments do not exist?** We believe this could affect things in two ways. 
First, we might not have the critical zero expectation property for the Stein operator applied to the polynomial, hence we might not expect that PSD $= 0$ when $P = Q$. Second, we will not be assessing moments exactly because the gradients of the log target will not be linear in $x$. We will mention limitations around moments not existing in the discussion. **Dependence on Langevin Stein operator** The advantage of using the Langevin Stein operator is that when it is applied to monomials for a Gaussian target then we obtain monomials back, and therefore we are tracking convergence in moments. This may not hold true for other Stein operators. Nevertheless, applying diffusion Stein operators and other Stein operators, such as those considered in Kanagawa et al (2022), to different types of polynomials is an interesting question and can be studied further in future work. We will mention this in the discussion. **Anonymous Github site** The full set of code has been provided as a zipped folder in the supplementary material for (optional) review. We would prefer to use the first author's Github account if the paper is accepted to encourage appropriate attribution. --- Rebuttal Comment 1.1: Comment: Thank you for the explanations. As you will take the comments and suggestions on board I am happy to adapt my recommendation to accept. --- Reply to Comment 1.1.1: Comment: Thank you for your comments and for adapting your recommendation to accept. We would like to follow up on our earlier response about the case where moments do not exist with information about the performance of KSD for the Cauchy distribution. Theorem 10 from Gorham and Mackey (2015) shows that KSD with standard bounded kernels ($C_0$) fails to dominate weak convergence when the target has a bounded score function. In the case of the Cauchy distribution, $\left| \nabla \log p(x) \right| = \left| \frac{-2x}{1+x^2} \right| \leq 1$.
Hence it follows that KSD (and the linear time variants) also fail for Cauchy distributions. We will add this to our discussion about applications where moments do not exist.
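A quick numerical check of the bounded-score claim: for the standard Cauchy, $p(x) \propto 1/(1+x^2)$, so the score is $-2x/(1+x^2)$, maximised in magnitude at $x = \pm 1$.

```python
import numpy as np

# Score of the standard Cauchy, p(x) proportional to 1/(1+x^2):
#   d/dx log p(x) = -2x / (1 + x^2).
x = np.linspace(-100.0, 100.0, 2_000_001)
score = -2.0 * x / (1.0 + x**2)

# The magnitude is maximised at x = +/-1, where it equals 1 (by AM-GM,
# 2|x| <= 1 + x^2), so the score is bounded.
max_score = np.abs(score).max()
print(max_score)
```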
Summary: The paper proposes a Stein discrepancy metric (PSD) for hypothesis testing. Rather than attempt to target a broad set of functions, the paper argues that by restricting to polynomial moments (up to some order), one can obtain a metric that is light-weight for computation and also has better statistical power. Claims And Evidence: The authors claim some empirical benefits and provide appropriate benchmarks. They also provide a method for approximating their discrepancy metric from samples, and compute some asymptotics to characterize the quality of the approximation. Methods And Evaluation Criteria: Yes, the benchmarks are thorough and the comparison between methods seems fair. Theoretical Claims: As noted in claims, the primary theoretical claim relates to the asymptotics of their approximate statistics. There are also detailed derivations to justify correctness of discrepancy metric. Experimental Designs Or Analyses: The paper provides some assessments of PSD against other common KSDs, for common/fairly standard data sets. Supplementary Material: I did not have the opportunity to thoroughly review the supplementary material; I only skimmed the proofs and did not thoroughly check the experimental details. Relation To Broader Scientific Literature: The paper relates to known literature on the construction/selection of KSDs. More specifically, it discusses common KSDs in the context of their computational tractability. The relative merits and downsides of various prior schemes are mentioned, in order to highlight the need for KSDs which are easily computed and yet which still have statistical advantages. Essential References Not Discussed: As far as I am aware, the most relevant references have been covered. Other Strengths And Weaknesses: I find the main idea behind this work to be rather compelling; namely, I like this concept of testing against a smaller subset of functions. 
I believe this could be interesting if developed further, for instance by considering other classes of test functions. My main concern is that the paper could have benefited from further exploration of some of the ideas contained therein. Nonetheless, the statistic is simple, computationally cheap, and I believe it would certainly be of use to practitioners. Thus, I would recommend that this paper be accepted as it stands. Other Comments Or Suggestions: Line 322: sfix -> fix Questions For Authors: What if I wanted to test against a specific set of functions other than those given in the paper? Is there any other example of this generating an interesting set of statistics? For instance, if I wanted to test against the moments after applying some transform to my random variable, could I possibly concoct interesting tests this way? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your time reviewing our manuscript and for your valuable comments and suggestions. We will correct the typographical error. **What if I wanted to test against a specific set of functions other than those given in the paper? Is there any other example of this generating an interesting set of statistics?** It is possible to use functions other than polynomials, but the challenge with doing this is that the application of the Langevin Stein operator, which is required for the zero expectation property, modifies the form of the function. This means we may not obtain interpretable forms for other functions like we do for polynomial functions and Gaussian distributions. As an example, if one were to consider $g(x) = \sin(x)$, then after the application of a second-order Langevin Stein operator for a one-dimensional unit Gaussian target, the function would become $\mathcal{A}_x^{(2)} g(x) = -\sin(x) - x \cos(x)$. A discrepancy based on $g(x) = \sin(x)$ would therefore determine whether $\mathbb{E}_Q[\sin(x) + x \cos(x)] = \mathbb{E}_P[\sin(x) + x \cos(x)]$, which is not the same as the desired function $\sin(x)$. **For instance, if I wanted to test against the moments after applying some transform to my random variable, could I possibly concoct interesting tests this way?** This is an interesting question. Appendix D shows that applying an invertible linear transformation does not change the fact that, in the Gaussian case, the test is for the moments up to order $r$ of the original distribution. However, the statistical power can be affected by the choice of transformation. In particular, such a transformation may improve power when the posterior variances differ substantially in scale. It would be interesting to consider other transformations as well, and we will mention this in the discussion. 
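The $\sin(x)$ calculation above is easy to verify numerically; a minimal sketch, assuming the second-order operator has the form $\mathcal{A}_x^{(2)} g = g'' + g'\,\nabla \log p$ (consistent with the example), confirms both the stated form and its zero expectation under the target:

```python
import numpy as np

# For g(x) = sin(x) and a 1-d unit Gaussian target (d/dx log p(x) = -x),
# the assumed second-order Langevin Stein operator
#   (A2 g)(x) = g''(x) + g'(x) * d/dx log p(x)
# gives -sin(x) - x*cos(x), matching the expression above.
def stein_sin(x):
    return -np.sin(x) - x * np.cos(x)

x = np.linspace(-10.0, 10.0, 400_001)
dx = x[1] - x[0]
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Zero-expectation property: E_P[(A2 g)(X)] = 0 (the integrand is odd).
expectation = (stein_sin(x) * pdf).sum() * dx
assert abs(expectation) < 1e-7
```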
Following up on this and a comment by Reviewer JxPo on the effect of a mean shift on the PSD statistic, we aim to empirically investigate the effect of a mean shift during the review period. **My main concern is that the paper could have benefited from further exploration of some of the ideas contained therein.** We have further explored and commented on several topics as part of this review. We would be happy to consider further exploring other aspects of the paper if you would like to provide further details. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I would be interested in seeing these extensions. As my initial appraisal was already positive, I will maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you for your comments and interest in seeing the new results. We have now investigated the concept of a mean-shift further. We believe this will be of interest to you and Reviewer JxPo. As previously mentioned, using a mean-shift does not affect which discrepancies PSD is theoretically capable of detecting but it may affect PSD's statistical power in doing so. Upon further investigation, we have found that PSD with $r=1$ is mean-shift invariant because it is based solely on the score function, which does not change with such a transformation. However, PSD with higher $r$ uses both the samples and the score function in the discrepancy so it can be sensitive to mean-shifts. One could inflate the value of PSD using arbitrary mean shifts. To investigate this further, we have performed empirical investigations into the effect of the transformation $\tilde{x} = x - \mu_Q$, where $\mu_Q$ is the mean of $Q$. We believe this is the most sensible and practical mean shift to consider. We consider two cases based on a Gaussian $P$ with unit covariance, $N=100$ and $d=5$. 
We are interested in the case where the mean of $P$ ($\mu_P$) is not the same across all dimensions, since we believe this is where the impact on statistical power will be greatest. For this reason, we consider two cases, (1) $\mu_P = (a,0,\ldots,0)$ and (2) $\mu_P = (0,a,\ldots,a)$. The discrepancy will be in the first dimension in both cases (details below), so we are interested in how the relative scales of the means that are correctly specified versus misspecified affect the results. The table below shows the estimated statistical power based on 200 independent simulations. We consider both case (1) and case (2) in situations where the discrepancy is in the (first) mean ($\mu_Q = \mu_P + 0.5e_1$) or in the (first) variance ($\Sigma_Q[1,1] = \Sigma_P[1,1] + 0.5$). A "t" in front of the discrepancy indicates we have performed a mean-shift reparameterisation. 
| Case | Misspecified | Discrepancy | $a=-1.5$ | $a=-1.0$ | $a=-0.5$ | $a=0.0$ | $a=0.5$ |
|------|------------------|-----------------|------|------|------|------|------|
| 1 | Mean | PSD$_2$ | 0.70 | 0.11 | 0.06 | 0.16 | 0.87 |
| | | tPSD$_2$ | 0.98 | 0.22 | 0.06 | 0.13 | 0.84 |
| | | PSD$_3$ | 0.71 | 0.26 | 0.06 | 0.18 | 0.76 |
| | | tPSD$_3$ | 0.92 | 0.30 | 0.05 | 0.19 | 0.74 |
| 2 | Mean | PSD$_2$ | 1.00 | 1.00 | 0.76 | 0.20 | 0.08 |
| | | tPSD$_2$ | 1.00 | 1.00 | 0.84 | 0.14 | 0.06 |
| | | PSD$_3$ | 1.00 | 0.96 | 0.72 | 0.26 | 0.06 |
| | | tPSD$_3$ | 1.00 | 1.00 | 0.82 | 0.25 | 0.10 |
| 1 | Variance | PSD$_2$ | 1.00 | 0.96 | 0.78 | 0.55 | 0.32 |
| | | tPSD$_2$ | 1.00 | 0.98 | 0.84 | 0.54 | 0.32 |
| | | PSD$_3$ | 0.96 | 0.92 | 0.70 | 0.36 | 0.17 |
| | | tPSD$_3$ | 0.97 | 0.90 | 0.70 | 0.32 | 0.17 |
| 2 | Variance | PSD$_2$ | 0.97 | 0.83 | 0.62 | 0.39 | 0.27 |
| | | tPSD$_2$ | 1.00 | 0.98 | 0.88 | 0.56 | 0.34 |
| | | PSD$_3$ | 0.83 | 0.65 | 0.44 | 0.29 | 0.16 |
| | | tPSD$_3$ | 0.98 | 0.90 | 0.70 | 0.36 | 0.16 |

The results demonstrate that the original mean scaling affects the statistical power. 
The performance with the mean-shift reparameterisation is generally similar to, if not slightly better than, the performance with no reparameterisation. Importantly, the choice of parameterisation is a problem that affects Stein discrepancies more generally. KSD with radial kernels, i.e. kernels that are functions of $\| x-y \|$ like the Gaussian kernel, is mean-shift invariant. However, it is still sensitive to other reparameterisations, such as whitening. Determining the optimal parameterisation for Stein discrepancies is an open problem and an interesting point for further research. We will highlight this and the sensitivity of PSD to mean-shifts when $r\geq 2$ in the discussion.
Summary: The authors introduce a variant of Stein discrepancy which uses bounded degree polynomials as a Stein set. Computing the proposed Polynomial Stein Discrepancy (PSD) is straightforward using evaluations of the target score $\nabla \log P(x)$ and samples from the proposal distribution $Q$. For degree $r$, evaluating PSD given $n$ samples takes time $O(n {d+r \choose d}) \sim O(n d^r)$, whereas Kernel Stein Discrepancy requires $O(n^2 d)$ steps, which can be prohibitive for large sample sizes. To validate PSD, the authors compare it to KSD with IMQ and Gaussian kernels, as well as two competing linear time discrepancy metrics (RFSD and Gauss FSSD-opt). For goodness of fit testing, the PSD demonstrates exceptional performance relative to competing methods in distinguishing $P=\mathcal{N}(0, I_d)$ from a few different proof-of-concept $Q$ distributions, even outperforming KSD. The PSD is also competitive with KSD and outperforms other linear time methods for goodness-of-fit testing for the samples generated by an RBM. Finally, for parameter tuning in a simple test-case LMC method, the optimal parameter choice under PSD agrees with that of KSD, while being significantly cheaper to compute. ## update after rebuttal I appreciate the authors engagement with my questions and I keep my score. Claims And Evidence: The experimental claims made in this work are supported by established benchmarks. The derivations and proofs in this work are correct to the best of my knowledge. Methods And Evaluation Criteria: The experimental claims in this work are well supported. The authors compare PSD to other algorithms using established benchmarks, such as the goodness-of-fit tests used in Gorham & Mackey 2018, the RBM sampling task used in Liu et al. 2016 and Jitkrittum et al. 2017, and the bimodal Gaussian task introduced in Welling & Teh (2011). Theoretical Claims: I checked the derivation of PSD, the proof of Proposition 3.2, and the proofs detailed in Appendices A and B. 
Experimental Designs Or Analyses: I am unfamiliar with the literature on discrepancy metrics for testing sample quality. However, to the best of my knowledge, this work has used well-established experimental baselines to demonstrate the efficacy of the proposed approach. To the best of my knowledge the comparisons made between PSD and competitors are fair. Supplementary Material: I reviewed Appendices A and B and skipped Appendix C. Relation To Broader Scientific Literature: This paper belongs to the literature on Stein discrepancies and goodness-of-fit testing. The proposed method is a conceptually simple instantiation of this idea which is shown to outperform other linear-time methods in the literature. Essential References Not Discussed: N/a Other Strengths And Weaknesses: One strength of this work is that its writing is extremely clear and easy to understand. Also, the authors give many practical insights, e.g. "we recommend the bootstrap test in general because we empirically find that it has higher power than the asymptotic test using samples from $Q$" and "we have also found empirically that the power of the goodness-of-fit test based on RFSD is reduced when direct sampling from $P$ is infeasible," etc. Other Comments Or Suggestions: In Appendix A, only the first sequence of equalities is necessary. The fact that $\|\bar{z}\|_2^2 = \sup_{\beta \in \mathbb{R}^J : \|\beta\|_2 \leq 1} \sum_{k=1}^J \beta_k \bar{z}_k$ is a well-known property called 'self-duality of the $l^2$ norm' and it does not need to be re-proven. Some very minor typos: - Line 104 left 'PSD discrepancy' should just be 'PSD' - Line 95 right should read $\mathbb{E}_{X \sim P} [(\mathcal{A}_x^{(2)} g)(X)]$; inside the expectation is a random variable Questions For Authors: When does KSD outperform PSD? In Figure 1, there are no such examples, and in Table 1 setting $r > 1$ is sufficient to match KSD. 
What are some practical examples of non-moment-based perturbations that KSD can detect but not PSD? Code Of Conduct: Affirmed. Overall Recommendation: 4
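The linear-versus-quadratic scaling quoted in the review's summary (PSD at $O(n \binom{d+r}{d})$ vs KSD at $O(n^2 d)$) can be made concrete with a quick count (illustrative numbers only, not from the paper):

```python
from math import comb

# PSD evaluates O(n * C(d+r, d)) terms; KSD evaluates all O(n^2) sample
# pairs, each costing O(d).  With n = 10,000 samples, d = 5 and r = 3:
n, d, r = 10_000, 5, 3
psd_terms = n * comb(d + r, d)   # comb(8, 5) = 56 features per sample
ksd_terms = n * n * d
print(psd_terms, ksd_terms)      # 560000 vs 500000000
```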
Rebuttal 1: Rebuttal: Thank you for your time reviewing our manuscript and for your valuable comments and suggestions. We will correct the typographical issues and the appendix as suggested. **When does KSD outperform PSD? In Figure 1, there are no such examples, and in Table 1 setting $r>1$ is sufficient to match KSD. What are some practical examples of non-moment-based perturbations that KSD can detect but not PSD?** Probability distributions are completely determined by their moment generating functions, provided these exist (a necessary condition is that all moments are finite). Therefore, we can potentially expect the KSD to outperform PSD for target distributions lacking well-defined moments (e.g. those with heavy tails). We comment on this further in response to Reviewer 8qrD. We can also expect KSD to outperform PSD of order $r$ when the discrepancies lie in moments higher than the $r$th order moment. For example, KSD outperforms PSD with $r=1,2,3$ for the Student-t and Laplace examples in Figure 1 since the discrepancy is in the kurtosis ($4$th order moment). We will add details of when we can expect KSD to outperform PSD to the discussion. --- Rebuttal Comment 1.1: Comment: Thank you for the informative answer as to when KSD outperforms PSD. This helps me better understand the method. I plan to keep my score.
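The kurtosis point is easy to illustrate: a unit-variance Student-t matches a standard Gaussian in its first three moments, so the discrepancy first appears in the fourth. A minimal sketch with $\nu = 10$ (the degrees of freedom are illustrative, not necessarily those used in the paper's Figure 1), where the fourth moment is $3(\nu-2)/(\nu-4) = 4$ versus the Gaussian's $3$:

```python
import numpy as np

rng = np.random.default_rng(2)

# A Student-t with nu = 10, rescaled to unit variance, has mean 0,
# variance 1 and third moment 0 (matching N(0,1)), but fourth moment
# 3*(nu-2)/(nu-4) = 4, whereas a standard Gaussian has E[X^4] = 3.
nu = 10
t = rng.standard_t(nu, 1_000_000) * np.sqrt((nu - 2) / nu)

m2 = (t**2).mean()
m4 = (t**4).mean()
print(m2, m4)  # close to 1 and 4 (for a standard Gaussian, E[X^4] = 3)
```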
Doubly Robust Conformalized Survival Analysis with Right-Censored Data
Accept (spotlight poster)
Summary: This work presents a novel conformal inference framework for survival analysis with right-censored data, motivated by the limitations of existing conformal methods for constructing lower prediction bounds (LPBs) for general right-censored data. The core idea is to fit a censoring distribution and sample from the corresponding truncated distribution (“Decensoring”), changing the right-censored dataset into a semi-synthetic “decensored” dataset, where the existing methods apply. This paper proves in Theorems 3.3 and 3.6 that, under ideal assumptions, the marginal coverage of the constructed LPBs enjoys asymptotic double robustness via a fixed or adaptive censoring-time filter technique. Experimental results show that this framework, leveraging an adaptive cutoff, yields the best LPBs compared to other calibration methods. Claims And Evidence: Mostly. See below. Methods And Evaluation Criteria: Yes Theoretical Claims: Math proofs seem valid but are not fully checked. Experimental Designs Or Analyses: 1. Ablation of “decensoring” in model evaluations: The methodological novelty of this work mainly relies on the “decensoring” of the calibration dataset added to the established fixed/adaptive cut-off strategies. The claim that “as long as $f_{c|x}$ is reasonably accurate, our two-step method is anticipated to yield approximately valid inferences” in section 2.3.3 is not backed by information about how accurate the fitted censoring distributions are in the experiments. Even if double robustness does not require a perfect censoring distribution theoretically, a discussion of how the performance of the pre-trained censoring distribution impacts the conformal LPB is meaningful in practice. Supplementary Material: The Appendix section has been reviewed. Codes are not reviewed. Relation To Broader Scientific Literature: “Decensoring” or, more broadly, modelling both censoring and time-to-event is not a new idea. 
The key contribution of this work is to establish an integrated theory and methodology with the established conformal survival analysis. Essential References Not Discussed: An important contemporary work is found here: https://openreview.net/forum?id=JQtuCumAFD They introduced two methods: a “focused” method as dropout and a “fused” method as a similar imputation method. According to their claim, their LPBs are finite-sample valid in section 1.3 and also approximately valid in section 3.1. Other Strengths And Weaknesses: Strength: Detailed information on the major conformal methods in terms of pseudo-code is nice. Weaknesses: If I understand Table A7 correctly, for SOTA models (random survival forests) on real-world observational datasets, DR-COSARC is not as competitive as Naive CQR. The writing of this paper needs to be improved. 1. Related work should be expanded and references to censoring-time imputation should be added, while repeated citations of Candes et al. (2023) and Gui et al. (2024) in Section 2 should be simplified. Use a preliminary section if necessary. 2. While the computational cost of conformal prediction methods can be justified by the need for informed decision-making in fields like healthcare and finance, the potential computational cost of the imputation step and the conformal step should be mentioned as a potential scalability limitation in the Discussion. Other Comments Or Suggestions: 1. The authors only mentioned DeepSurv (Katzman et al., 2018) and Random Survival Forest (Ishwaran et al., 2008) in the introduction and they only evaluated Random Survival Forest in Section 4.1. 2. The implementation of random survival forests is not consistent: “e.g., implemented via the R package ranger” in section 2.3.3, while it becomes “randomForestSRC” in section 4.1. 3. Typo in line 181. 4. Typo in the title of Algorithm A5. Questions For Authors: I have some practical questions: 1. Is it fair to hold out 20% of the data for calibration when comparing with the uncalibrated method? 
How are the pretrained models obtained? 2. As a main takeaway of this work, this framework is competitive in simulated cases where both pre-trained models are badly fitted: how do we know they fit badly? The censoring distribution modeling would flip the event indicator. How does the censoring rate in the training dataset affect the performance of the pre-trained models (and thus the proposed framework)? Code Of Conduct: Affirmed. Overall Recommendation: 4
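To give intuition for the “decensoring” step described in the review's summary, here is a toy sketch (not the paper's actual algorithm): it assumes, purely for illustration, an exponential censoring model, whose truncated distribution is easy to sample thanks to memorylessness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy right-censored data: latent event times T and censoring times C.
n = 5_000
T = 2.0 * rng.weibull(1.5, n)            # latent event times
true_rate = 0.3                          # exponential censoring hazard
C = rng.exponential(1.0 / true_rate, n)  # latent censoring times
obs = np.minimum(T, C)                   # observed follow-up time
event = T <= C                           # event indicator (True = uncensored)

# Fit the censoring distribution: exponential MLE, treating censoring as
# the "event" of interest (rate = #censorings / total exposure).
fitted_rate = (~event).sum() / obs.sum()

# "Decensoring": for uncensored subjects, C is only known to exceed obs, so
# impute it from the fitted model truncated to (obs, inf).  Memorylessness
# of the exponential makes the truncated draw simply obs + Exp(fitted_rate).
C_imputed = np.where(event, obs + rng.exponential(1.0 / fitted_rate, n), obs)

# Sanity check: among uncensored subjects, the imputed censoring times
# should roughly match the latent ones in distribution.
print(C_imputed[event].mean(), C[event].mean())
```

For a general censoring model, the truncated draw would instead use the model's conditional survival function, but the logic is the same.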
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and encouraging evaluation. ## Accuracy of the Censoring Model We appreciate your interest in how the censoring model’s accuracy affects performance. However, we believe there may be some misunderstanding about how best to assess this in practice. Rather than reporting standard imputation error metrics—which would be both difficult to interpret in our context and unobservable in real applications—we evaluate performance by varying key simulation parameters that directly affect model quality, such as training sample size (Figures A2, A3, A6, A7) and the number of irrelevant features (Figures A5, A9). These factors offer a more meaningful and observable way to assess the impact of censoring model accuracy. Collectively, these experiments support our central empirical claim: the method achieves valid inference as long as either the survival or censoring model is reasonably accurate. This double robustness is the core strength of our approach and is clearly demonstrated in our results. ## Discussion of Concurrent Work The paper by Davidov et al. (2025) is concurrent with ours. It appeared on OpenReview one week before the ICML submission deadline—after our submission—and prior to that, only an anonymous version was available, without author names. This made it both too recent and too ambiguous to cite appropriately. Moreover, even the anonymous version was first posted within four months of the ICML deadline, and we became aware of it only when our paper was already essentially finalized. While the two papers address related goals, the methods are conceptually very different. It would be interesting to see a detailed empirical comparison in future work. ## Empirical Performance Relative to Naive-CQR This seems to be a misunderstanding. Table A7 shows our method yields higher (more informative) LPBs than Naive-CQR on most datasets—except COLON, where performance is similar. 
This aligns with Figure 2 and expectations: Naive-CQR does not properly handle censoring and tends to be too conservative (although not always). ## Writing of Sections 1.3 and 2 Thank you for the suggestion—we will expand the related work section in the camera-ready version. In Section 2, we intentionally highlight Candès et al. (2023) and Gui et al. (2024), as our method directly builds on and generalizes their work. While their names appear multiple times, each instance clarifies this connection. We believe the structure is appropriate but would welcome specific suggestions if further streamlining is needed. ## Computational Cost We agree this deserves mention. The imputation step adds negligible overhead—once the censoring model is fitted (as also required by existing methods), sampling is extremely fast. Compared to model fitting, the additional cost is minimal. We will clarify this in the revised version. ## Other Comments or Suggestions While Section 4.1 focuses on experiments using a generalized random forest (grf) for the survival model, we do evaluate other models in the appendix, as described in Section 4.3. We will clarify this point in the main text and fix the minor inconsistencies and typos you pointed out. ## Fairness of Holding Out Calibration Data for Uncalibrated Method This is a fair question. While the uncalibrated method could use more training data, we applied all calibration techniques to the same pre-trained model to ensure a clean comparison. Using different training sets would introduce some confounding. In any case, as shown in Figures A3 and A7, the uncalibrated method performs poorly in harder settings regardless of the training sample size, so using more data would not meaningfully improve its results. We’ll clarify this in the final version. 
## Model Misfit and the Motivation for Conformal Inference In practice we wouldn’t know if the pre-trained survival model is accurate enough, and that’s exactly the motivation for conformal inference. Our goal is to provide robust, distribution-free guarantees even when the survival model is unreliable. Our framework is designed to achieve valid inference as long as either the survival or censoring model is reasonably accurate. The simulations were intentionally constructed to reflect this uncertainty and demonstrate our method’s double robustness. ## Varying the Censoring Proportion As the censoring rate increases, the censoring model becomes easier to estimate, while the survival model becomes harder to learn—highlighting the value of our method’s double robustness. Since only one model needs to be accurate, coverage should be maintained across a range of censoring levels. That said, extreme censoring can make our survival bounds less informative (i.e., more conservative), even if their coverage remains valid. This is a general challenge in survival analysis, not specific to our approach. We appreciate this question, and space permitting, will include this discussion and potentially an additional supporting figure in the final version.
Summary: The paper proposes a doubly robust conformal inference method for constructing lower prediction bounds (LPBs) for survival times under right-censored data. By imputing unobserved censoring times using a machine learning model and calibrating survival models via weighted conformal inference, the method theoretically guarantees asymptotic validity if either the censoring or survival model is correctly specified. Extensive experiments on synthetic and real datasets demonstrate robustness in challenging scenarios where existing methods (e.g., Kaplan-Meier decensoring) underperform. The approach extends prior work on type-I censoring to handle more practical right-censoring settings. ## update after rebuttal I updated the overall recommendation from 3 to 4 as my concerns have been adequately addressed. Claims And Evidence: Overall, the claims made in the submission can be supported by theoretical proofs or experiments. Methods And Evaluation Criteria: Comprehensive evaluation across 10 synthetic settings and 7 real datasets. Theoretical Claims: Theorems 3.3 and 3.6 are well-constructed, leveraging double robustness principles from causal inference. However, the proofs assume asymptotic consistency of model estimates, which may not hold with finite training data. Experimental Designs Or Analyses: Synthetic experiments convincingly validate robustness, but lack exploration of high-dimensional covariates (p≫1000). Real-data preprocessing (e.g., merging rare factor levels) is not rigorously justified and may introduce bias. Supplementary Material: Implementation code was provided in the supplementary material. Relation To Broader Scientific Literature: The work builds on conformal inference for survival analysis and connects to double robustness in causal inference. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: First method to extend doubly robust conformal inference to right-censored data. Clear theoretical-empirical synergy. 
Weaknesses: Limited discussion of computational costs for imputation and calibration. Real-data experiments lack domain-specific evaluation (e.g., clinical utility of LPBs). Other Comments Or Suggestions: N/A Questions For Authors: Why use split-conformal instead of full-conformal inference? Could cross-validation improve small-sample performance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful and generally positive review. We appreciate your recognition of the novelty and strength of our contributions and are happy to respond to your comments below. ## Asymptotic vs Finite-Sample Theoretical Results You're right that Theorems 3.3 and 3.6 provide asymptotic double-robustness guarantees. However, as shown in Theorems A3 and A5 (Appendix), we also establish finite-sample coverage results for both the fixed- and adaptive-cutoff versions of our method. While these bounds are somewhat loose and not directly useful for practitioners, they rely on weaker assumptions and are key to establishing asymptotic validity, offering some theoretical support for our method’s strong empirical performance. The challenge of deriving tighter finite-sample guarantees reflects the inherent complexity of conformal inference under censoring—a challenge also noted in Candès et al. (2023) and Gui et al. (2024), even in the simpler type-I setting. In short, censoring makes theoretical analysis harder. Importantly, our experiments show that the adaptive version of our method consistently outperforms the fixed-cutoff version, despite requiring somewhat stronger assumptions for double robustness. This illustrates a broader pattern in modern data science: practical performance sometimes exceeds what theory can rigorously explain. We’ll clarify this point and make the finite-sample results more visible in the revised version. ## Experiments with High-Dimensional Data Our synthetic experiments are designed to evaluate the performance of our conformal inference method under the most relevant practical challenges that may affect the quality of the survival and censoring models—including model misspecification, limited sample sizes, and increasing numbers of covariates. These challenges commonly arise in high-dimensional settings, but in our context they do not require using extremely large feature spaces in simulation. 
Figures A5 and A9 (referenced in Section A.3) vary the number of irrelevant covariates used to train the censoring model and show that performance degrades as more noise features are added—due to overfitting. This is consistent with double robustness and precisely represents the kind of failure mode that our framework is designed to mitigate. While a sparse learner could further improve performance in such settings (since the true censoring model is sparse), our focus is on imputation and conformal inference, not sparse modeling. Our method is designed to be broadly applicable and can be used with any underlying model, including sparse ones. We will highlight these experiments more clearly in the revised version. ## Data Pre-Processing The preprocessing steps in Appendix A4.1—such as merging rare categorical levels and removing extreme outliers—are standard procedures to ensure model compatibility and stability. We are not aware of any way these steps would introduce “bias” in our experiments. That said, we will clarify and briefly justify them in the final version to avoid confusion. ## Computational Cost Thanks for this suggestion. The imputation step is a key conceptual contribution of our method, but computationally it is very light. Once the censoring model is trained, sampling requires evaluating a one-dimensional integral (Eq. 4), which is either analytical or quickly approximated. This step is not a bottleneck. As with all comparable methods, the main cost lies in training the survival and censoring models. We’ll clarify this in the revised paper. ## Practical Relevance We agree that domain-specific evaluation—such as assessing the clinical value of lower predictive bounds—is important future work. Our goal in this paper was to develop and validate the core methodology, which is why we focused on benchmark evaluations. That said, we believe our survival LPBs have the potential to be useful for practitioners. 
For example, in healthcare they may inform treatment prioritization based on survival likelihood under resource constraints (as noted by Candès et al., 2023). We will expand this motivation in our introduction to make the practical relevance more self-contained. ## Split vs. Full Conformal Inference, and Possible Extensions using Cross-Validation We chose split conformal inference for its simplicity, efficiency, and ability to support double-robustness guarantees under relatively mild assumptions. These same considerations motivated prior work on conformal survival analysis under type-I censoring (Candès et al., 2023; Gui et al., 2024). Extending our method to full conformal inference, cross-validation, or jackknife+ could potentially improve small-sample performance, but each would involve substantial computational and theoretical challenges—especially due to censoring. We agree these are potentially interesting directions for future work and will mention them in the revised discussion. Thank you for the suggestion!
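The split-conformal construction referred to throughout this exchange can be illustrated with a minimal sketch. This is a generic split-conformal lower-bound recipe with placeholder names, not the paper's exact DR-COSARC algorithm; the score definition and quantile model are assumptions made purely for illustration.

```python
import numpy as np

def split_conformal_lpb(scores_cal, q_pred_test, alpha=0.1):
    """Generic split-conformal lower prediction bound (LPB).

    scores_cal:  conformity scores on the calibration set, e.g.
                 s_i = qhat(X_i) - T_i, how far the model's estimated
                 alpha-quantile overshoots the observed survival time.
    q_pred_test: qhat(X_test), the model's estimated lower quantile
                 for a test point.
    Returns qhat(X_test) minus a calibrated correction chosen so that
    P(T >= LPB) >= 1 - alpha under exchangeability.
    """
    n = len(scores_cal)
    # finite-sample-adjusted empirical quantile of the scores
    k = int(np.ceil((n + 1) * (1 - alpha)))
    correction = np.sort(scores_cal)[min(k, n) - 1]
    return q_pred_test - correction
```

With calibration scores [1, 2, 3, 4] and alpha = 0.25, the finite-sample rule picks the 4th sorted score (4.0) as the correction, so a predicted quantile of 10.0 yields an LPB of 6.0.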
Summary: This paper studies conformal inference for right-censored data, in which it generalizes prior works beyond the type-I censoring setting (i.e., when all the censoring times are observed). Its main idea is to impute the censoring times by sampling from the estimated censoring mechanism, and obtain a "synthetic" data set whose censoring times are fully observed, on which prior methods can be applied to produce calibrated lower prediction bounds. It is shown that the proposed method enjoys a double-robustness property, achieving asymptotic coverage guarantees as long as either the distribution of $T|X$ or $C|X$ is well estimated. The proposed method is evaluated in extensive simulation and real-data studies, in comparison with existing methods. Claims And Evidence: The main claim of the paper is that it provides calibrated lower prediction bounds for right-censored data, generalizing prior works beyond the type-I censoring setting. The method is said to be asymptotically valid if either the censoring mechanism or the survival distribution can be well estimated. The claim is supported by theoretical results (Theorems 3.3 and 3.6), as well as simulation results. Methods And Evaluation Criteria: The proposed method makes it possible to construct LPB when the censoring times are not fully observed. This is achieved by imputing the missing censoring times by sampling from the estimated distribution of $C|X$. I wonder, however, how stable this algorithm is, considering that the missing censoring times are imputed with random variables. Theoretical Claims: I went over the proof outline, and did not spot any significant issue. Experimental Designs Or Analyses: The experimental design follows from settings in existing works; it also provides comprehensive comparison and sensitivity analysis. Supplementary Material: I went over the proof outline and the additional simulation results. 
Relation To Broader Scientific Literature: The main claim of the paper is that it provides calibrated lower prediction bounds for right-censored data, generalizing prior works beyond the type-I censoring setting. This greatly improves the applicability of the framework. Essential References Not Discussed: NA. Other Strengths And Weaknesses: NA. Other Comments Or Suggestions: Page 4, line 181, $\hat C i$ -> $\hat C_i$ Questions For Authors: 1. Since the missing censoring times are imputed by sampling from the estimated $C|X$, the results depend on the realization of the random variables. I wonder how stable this algorithm is to different realizations of censoring times. 2. (If I understand correctly) The proposed method (fixed version) does have finite-sample guarantees when $C|X$ is known, which is perhaps worth emphasizing. Code Of Conduct: Affirmed. Overall Recommendation: 4
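The imputation step summarized in this review can be sketched concretely. The exponential working model for C | X below is an illustrative assumption (its memorylessness makes sampling from the truncated conditional C | X, C > tilde_T trivial); it is a toy stand-in for the paper's estimated censoring model, not the actual implementation.

```python
import numpy as np

def impute_censoring_times(t_obs, event, rate_hat, rng):
    """Build a synthetic type-I censored sample by imputing latent C.

    t_obs:    observed times tilde_T = min(T, C)
    event:    1 if the event occurred (so C > tilde_T is latent),
              0 if censored (so C = tilde_T is observed)
    rate_hat: per-subject rate of an assumed exponential model for C|X
              (an illustrative stand-in for the estimated f_{C|X})

    For event subjects we draw from C | X, C > tilde_T; with an
    exponential model, memorylessness gives C = tilde_T + Exp(rate).
    """
    t_obs = np.asarray(t_obs, dtype=float)
    ev = np.asarray(event, dtype=bool)
    rate = np.asarray(rate_hat, dtype=float)
    c_imp = t_obs.copy()                      # censored: C is observed
    c_imp[ev] = t_obs[ev] + rng.exponential(1.0 / rate[ev])
    return c_imp
```

Once every subject carries a (possibly imputed) censoring time, the type-I methods of Candes et al. (2023) and Gui et al. (2024) can be applied to the synthetic sample, as the review describes.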
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and for your positive evaluation of our paper. We appreciate your careful reading, and we are happy to answer your two very insightful questions. ## Imputation randomness *"Since the missing censoring times are imputed by sampling from the estimated $C \mid X$, the results depend on the realization of the random variables. I wonder how stable this algorithm is to different realizations of censoring times."* This is an excellent question. Conceptually, we believe statisticians should not view randomness as inherently problematic. Imputing missing censoring times by sampling from their estimated conditional distribution given observed data is a natural and principled strategy. Even if the censoring times were observed directly, they would still be treated as realizations from a random process—our method simply mirrors that generative view using model-based imputation. This allows us to reduce a more complex model-free problem (survival analysis under general right censoring) to a more tractable one: survival analysis under type-I censoring, for which conformal methods are more easily applicable. While data randomness is fundamental to statistical inference in general, many conformal inference methods in particular introduce additional randomness into the observed data—most notably through random sample splitting between training and calibration sets. The random imputation step in our method is thus fully consistent with the broader conformal inference framework and, in fact, this randomness plays a key role in enabling formal coverage guarantees. That said, your point about stability is well taken. To address it concretely, we will include a figure in the camera-ready version of the paper that empirically assesses the variability of the results obtained by applying our method to the same observed data set across different imputations of the latent censoring times. 
These experiments will show that, as long as the calibration sample size is at least moderately large (on the order of a few hundred observations), the sampling variability introduced by our method is minimal and does not meaningfully impact the informativeness of our predictive inferences. Moreover, as expected, this variability decreases as the calibration sample size increases. Prompted by your suggestion, we will also mention in the discussion that one could explore possible extensions of our method that further reduce this variability by leveraging recent developments in e-value-based methods. We expect that such extensions may reduce variance at the cost of increased conservativeness—an interesting tradeoff that we agree is worth flagging as a possible direction for future work. ## Finite-sample theory *“(If I understand correctly) The proposed method (fixed version) does have finite-sample guarantees when $C \mid X$ is known, which is perhaps worth emphasizing.”* Yes, you are absolutely correct. Indeed, Theorem A3 in Appendix A5.3 provides a finite-sample guarantee for our method under the fixed-cutoff implementation, which becomes tighter if the censoring mechanism is known. We will highlight this more clearly in the main text. Additionally, we have also established finite-sample validity for the adaptive-cutoff implementation in Theorem A5 (Appendix A5.4). These results are somewhat more conservative—not because the method is less effective in practice, but because the theoretical analysis is more technically involved. This is similar to what Gui et al. (2024) encountered in their analysis under type-I censoring. We believe this distinction is worth clarifying in the camera-ready version: while the adaptive version of our method performs better empirically (as our results show), it is more challenging to analyze theoretically. 
This situation is not uncommon in modern statistical methodology, where practical performance can sometimes outpace what current theory can tightly capture. We’ll make sure to emphasize that the theory remains sound in both cases and the gap lies primarily in the sharpness of the finite-sample bounds, not in the validity or robustness of the approach itself. Thank you again for your helpful comments. We believe your suggestions will help improve the clarity and impact of the final version.
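The stability check promised in this rebuttal can be approximated with a toy experiment (a hypothetical setup, not the paper's actual study): hold the observed data fixed, rerun the random imputation many times, and measure the spread of a downstream calibration statistic.

```python
import numpy as np

def imputation_spread(n, reps=200, alpha=0.1, seed=0):
    """Std. dev., across repeated imputations of the latent censoring
    times, of a statistic computed on the same observed data.

    Toy model: T ~ Exp(1), C ~ Exp(1/2); we observe tilde_T = min(T, C)
    and the event indicator, impute C for event subjects from the
    (here, known) exponential censoring model, and track the (1 - alpha)
    empirical quantile of the imputed censoring times as a stand-in for
    a calibration cutoff.
    """
    rng = np.random.default_rng(seed)
    t = rng.exponential(1.0, n)
    c = rng.exponential(2.0, n)               # rate 1/2 -> scale 2
    t_obs, event = np.minimum(t, c), t < c
    stats = []
    for _ in range(reps):
        c_imp = c.copy()
        # memorylessness: C | C > tilde_T  =  tilde_T + Exp(1/2)
        c_imp[event] = t_obs[event] + rng.exponential(2.0, event.sum())
        stats.append(np.quantile(c_imp, 1 - alpha))
    return float(np.std(stats))
```

Consistent with the authors' claim, the spread shrinks as the sample grows (roughly like 1/sqrt(n) in this toy model), so imputation randomness has little effect once the calibration set is moderately large.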
Summary: The paper addresses the problem of constructing a lower prediction bound (LPB) for survival time under conditionally independent right censoring. In particular, the paper extends the approaches in Candes et al. (2023) and Gui et al. (2024) for type-I censored data, which assume that the right censoring time is always observed, to the censoring scenario more commonly encountered in practice, where the censoring time is only observed for censored subjects. The proposed approaches first impute the missing censoring times from the estimated conditional censoring distribution, and then apply the methods in Candes et al. (2023) and Gui et al. (2024) for type-I censored data. With an additional adjustment, the proposed LPBs enjoy a doubly robust (DR) property: the obtained LPB has asymptotically valid marginal coverage when either (i) the conditional censoring distribution is estimated sufficiently well, or (ii) the conditional quantile of the event time is consistently estimated and the true conditional event time distribution satisfies a smoothness condition. In addition, when (ii) holds, the proposed LPB also has valid approximate conditional coverage. Simulation studies compare the performance of the proposed LPBs with existing approaches. The results show that the proposed LPBs are robust in challenging settings. The proposed approaches are applied to construct LPBs for survival times in seven publicly available datasets. ## update after rebuttal I updated the overall recommendation from 2 to 4. Previously I chose 2 because there were errors in a math expression and the algorithm in the main text omitted the important DR adjustment step, which caused confusion. The authors clarified that the error in the math expression is only a typo and that they will revise the algorithms and writing of the paper. This resolves the above concerns.
As for the strong assumption in the theory, I agree that it is okay to keep it as it is for the ICML paper, without making the paper more technical. The authors also mentioned that they will improve how the simulation results are addressed in the main text. Claims And Evidence: Overall, the claims made in the submission are supported by clear and convincing evidence, but I found the statement of the algorithms and the claims for the DR results potentially confusing and misleading. The algorithm first imputes the missing censoring times and then applies the existing approaches for handling type-I censored data, and Proposition 2.2 shows that if the true conditional censoring distribution is known, the data with the imputed censoring times follow the same distribution as the hypothetical type-I censored data in which all censoring times were observed. This justifies the use of the imputed censoring times and the application of the existing approaches for type-I censored data. While reading, this made me think that the proposed approach would rely on the conditional censoring distribution being estimated reasonably well, so it is somewhat surprising that the approach enjoys the DR property. It turns out that the algorithms also have a DR adjustment step, which takes the minimum of (i) the LPB from the existing approaches under type-I censoring and (ii) the estimated quantile function of the conditional event time distribution. Since (ii) has approximately valid marginal coverage if the quantile function of the conditional event time distribution is estimated sufficiently well, by definition the adjusted LPB has the DR property claimed in the paper. However, this key adjustment step is barely mentioned in the main text and is buried in Algorithms A4 and A5 in the appendix (except for Algorithm 3, Steps 4 and 5, which replicate the DR adjustment step of Algorithm A5, Step 3, without highlighting it as an adjustment step for DR).
So I think their current writing is potentially confusing. More discussion of this key step and of the consequences and potential limitations of taking the minimum of two LPBs is needed for clarity. Methods And Evaluation Criteria: Yes, the proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense for the problem or application at hand. Theoretical Claims: I checked their theoretical results in Proposition 2.2, Theorem 3.3, and Theorem 3.6 and the corresponding assumptions, as well as some of the proofs in the appendix. Besides the concerns about their statement of the algorithms and the DR results mentioned in the “Claims and Evidence” section, I’m also concerned with the second condition in their Assumption 3.1 and Assumption 3.4, which involves the sample size n of the calibration set. The statement is that a quantity related to the estimation error of the conditional censoring distribution converges to zero faster than the 1/n rate, as the sample sizes n and N go to infinity. I guess N denotes the sample size of the data used to train the censoring model, but the definition of N does not seem to be stated in the paper. My concern with this assumption is how strict it is. In practice, the integral term in their assumption that involves the estimation error of the conditional censoring distribution usually converges at no faster than the N^{-1/2} rate, which is the parametric rate of convergence. So it seems that this condition requires N to be of order at least n^2, which is pretty large. The choice in the simulation section, with sample size 1000 for each set, seems not to follow this requirement. I think more discussion and examples of when this condition can be satisfied would enhance the theory part. Also, more discussion of how Assumptions 3.1, 3.2, 3.4, and 3.5 compare with the assumptions in the literature would make the theory part stronger. There is an error in equation (4).
Should the limits of the integral be from \tilde T_i to infinity? The discussion in lines 180–184 (left) is inaccurate: the synthetic sample shares the same distribution as the ideal sample only if \hat f_{C|X} is exactly equal to the truth. Experimental Designs Or Analyses: I read the simulations in the main text and briefly browsed the additional results in the appendix. The simulation studies are conducted under various situations, but little discussion is given of how the results relate to the theoretical double-robustness results. I think the paper would benefit from discussing under which scenarios the proposed methods are (1) expected to have good estimates for the censoring model and/or event time model, (2) expected to show good performance according to the theory, and (3) expected to outperform the comparison methods, etc. For example, the paragraph on “Leveraging Prior knowledge on P_{C|X}” makes me think that this probably relates to how well the conditional censoring distribution can be estimated under these models. I’m also curious how the performance of the proposed approaches compares with their counterparts without the DR adjustment step. In the simulation results, the lower bound for the proposed LPB is lower than the naïve CQR, which is not surprising, as the DR adjustment step in the proposed algorithms takes the minimum of two LPBs, one of which is the naïve CQR. I suggest commenting on this result as it relates to a potential limitation of the proposed approach. For the application, the event time distribution is estimated under different models, but the censoring distribution is always estimated using grf. Why is only one censoring model considered, instead of multiple ones as for the event time distribution? The discussion of the application results could be more informative. For example, in line 414 (left), “the comparatively simpler nature of survival analysis with these datasets”.
Do you mean that the true distribution of T|X or C|X follows simpler models, or that there are fewer covariates, or that the models can estimate the conditional distributions T|X and C|X well enough that the error-rate condition required in your assumptions is more likely to hold? For the data analysis results in the appendix (Table A4, etc.), what is the unit of the LPB? I saw the numbers vary a lot across datasets. Does that mean there is more uncertainty in some than in others, or are the differences due to different units of the time-to-event in the datasets? How can we tell if they are informative? Supplementary Material: I did not review the supplementary material. I reviewed some parts of the appendix as mentioned above. Relation To Broader Scientific Literature: The paper contributes to the conformal inference literature by extending the existing approach for constructing LPBs for time-to-event data under type-I censoring to the more general type of right censoring encountered in practice, where the censoring times are not always observed. The paper extends both the fixed-cutoff and adaptive-cutoff algorithms for type-I censored data. Essential References Not Discussed: I’m not aware of any. Other Strengths And Weaknesses: Strength: originality in addressing conformal inference for right-censored data under a more realistic censoring setting than the type-I censoring that has been studied in the literature; comprehensive simulations and real data analysis. Weakness: the writing and flow of the paper can be improved. For example, perhaps Section 2.2 can be folded into 1.3. The last paragraph in the right column at line 157 may be simplified by saying that (3) is the conditional density of C|X,\tilde T, E=1. The clarity of the writing may be improved if the authors use detailed and accurate wording. For example, what do “robust” and “delicated” mean in “tend to be more robust in more delicated scenarios” (line 088 left)?
Other Comments Or Suggestions: In line 181, the i in Ci should be a subscript. Questions For Authors: 1. How can the two proposed algorithms and the writing of the paper be revised to highlight the key adjustment step behind the DR property and to acknowledge the limitations this adjustment step may cause? 2. What is the cost of achieving the DR property with the adjustment step? Is there any justification that, with this adjustment step, the LPB is still informative enough? Is it worth achieving DR at the cost of this adjustment step? 3. Can the second condition of Assumptions 3.1 and 3.4 be relaxed? 4. How can the simulation studies be better designed, or how can the current simulation results be better interpreted, to show the advantage of the DR property of the proposed approaches? I think addressing the above questions will improve the clarity of the paper and make the paper much stronger. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed and thoughtful review. We find your feedback very helpful. While some presentation issues may have caused confusion, they are easily resolved and do not reflect inherent flaws. We respond below and will incorporate your suggestions into the revised paper. ## DR Adjustment You're right that the DR adjustment is essential for establishing double robustness. This is discussed in Section 2 and shown in line 5 of Algorithm 3. The confusion arose because it is also part of the fixed-cutoff method but was accidentally omitted from Algorithm 2 (though present in Algorithm A4). We will fix this omission and also correct the outdated sentence comparing Algorithms 2 and 3. Empirically, the DR adjustment typically has a small effect, as the preliminary LPB very often is already lower than the uncalibrated quantile. Based on your comment, we will include in the appendix an additional figure showing this explicitly. ## Convergence Rate of Censoring Model We agree the convergence rate assumed for the censoring model in the theoretical analysis is quite strong, but we highlight this directly after Assumption 3.1, noting it suggests more data should be allocated to training than calibration. We view this as a reasonable and useful recommendation. That said, it may be possible to relax the convergence rate under stronger assumptions or with more technical effort. However, we do not see a compelling need nor an obvious way of doing so without making the paper far more technical than is appropriate for ICML. Our main goal is to establish the soundness of the method and highlight its double robustness—a key property not shared by existing alternatives. Whether the convergence rate assumption can be weakened seems secondary. In practice, what matters is whether the method performs well, and our results show that it does. (After all, it is unclear what it would even mean for a convergence rate to “hold” for a single finite dataset.)
We would of course welcome future work that seeks to tighten or relax this theoretical condition. ## Censoring Time Imputation Thank you—you are right about the typographical error in Equation (4); we will correct the integration limits. We will also revise the vague phrase “accurately estimates” to clarify that the synthetic sample matches the target distribution only if $\hat{f}_{C \mid X} = f_{C \mid X}$. ## Comparison with Naive CQR There appears to be a misunderstanding. As stated on page 6, Naive CQR calibrates CQR to predict $\tilde{T} = T \land C$ instead of $T$, which leads to overly conservative bounds. This is very different from our DR correction, which uses the survival model’s uncalibrated quantile. We do not take the minimum with the Naive CQR bound, and doing so would be unjustifiably conservative. Indeed, our method produces more informative bounds than Naive CQR, as shown in Figures 1 and 2. ## Design of Empirical Experiments We conducted a thorough empirical evaluation, much of it in Appendix A3–A4. We will better reference these results in the main text and will add a new figure to further highlight double robustness. Our experiments explore the performance of our method in the face of key challenges like model misspecification, limited training data, and covariate dimensionality. For example:
- Figures A2, A6: Vary censoring model training size; performance improves as quality increases.
- Figures A5, A9: Add irrelevant covariates in censoring model; overfitting degrades performance.
- Figures A3, A7: Vary both models’ training size; highlight double robustness.
Figure A2 is particularly illustrative: with a poor survival model, performance improves rapidly as the censoring model improves. We will complement this with a similar figure where the survival model is more accurate and our method remains robust even if the censoring model is poor.
## Different Censoring Models While the main text uses GRF for simplicity, Appendix A3.2 includes additional experiments with other censoring models (see p.18). We limited real-data results to GRF to avoid overwhelming the appendix, which already contains 16 supplementary figures and 7 tables. We believe these experiments sufficiently support our claims. ## Survival Modeling in Benchmark Data Sets We will clarify that our comment about “simpler” datasets refers to the fact that the uncalibrated survival model performs reasonably well, suggesting $T|X$ is easier to estimate in these benchmarks. However, our synthetic experiments show that when survival modeling is hard, alternative methods fail—whereas ours maintains valid coverage as long as either model is accurate. If the survival model is perfect, conformal inference may not be needed—but in practice, we don’t know if it is, and our method offers stronger protection than alternative approaches. ## LPB Units LPBs are in different units across datasets and should not be directly compared across them.
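As described by the reviewer and confirmed in this rebuttal, the DR adjustment amounts to taking the minimum of two candidate bounds; a short sketch (placeholder names) shows why this yields double robustness.

```python
def dr_adjusted_lpb(lpb_conformal, q_alpha_survival):
    """Final bound = min of (i) the conformal LPB, valid when the
    censoring model is accurate, and (ii) the survival model's own
    uncalibrated alpha-quantile, approximately valid when the survival
    model is accurate. Since min(a, b) <= a and min(a, b) <= b, the
    event {T >= min(a, b)} contains both {T >= a} and {T >= b}, so the
    adjusted bound inherits the coverage of whichever input is valid."""
    return min(lpb_conformal, q_alpha_survival)
```

Per the rebuttal, the preliminary conformal LPB is usually already the smaller of the two, so in practice the adjustment typically changes little.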
Summary: This paper proposes a new conformal inference approach specifically for right-censored data, aimed at constructing lower prediction bounds (LPBs) for survival times. The method is theoretically asymptotically doubly robust and demonstrates strong empirical results, offering more informative and reliable LPBs compared to existing alternatives. Claims And Evidence: The paper's proposal of a doubly robust conformal inference method for survival analysis is supported by its theory and strong empirical results. Methods And Evaluation Criteria: The paper presents DR-COSARC (Doubly Robust Conformalized Survival Analysis under Right Censoring), which builds on recent work in conformalized survival analysis by focusing on more practical right-censored cases. It further incorporates doubly robust properties, making the proposed framework more reliable. DR-COSARC is designed with both fixed and dynamic cutoffs, enhancing its generalizability. The method is evaluated on both simulated and real-world datasets, and compared against several existing frameworks. Theoretical Claims: The paper theoretically proves (under the provided assumptions) that the proposed DR-COSARC is doubly robust for both constant cutoffs (Theorem 3.3) and adaptive cutoffs (Theorem 3.6). Experimental Designs Or Analyses: While the experiments in Fig. 1 demonstrate that the proposed method performs well, its success is attributed to the framework’s ability to model the censoring distribution accurately. However, this aspect is not clearly demonstrated or fully substantiated. Supplementary Material: N/A Relation To Broader Scientific Literature: The contributions of the paper are related to the broader scientific literature on conformal inference. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your review and your positive evaluation of our paper. We’re grateful for the opportunity to clarify some aspects of our empirical analysis, particularly since many important results are presented in the appendix and may have been unintentionally overlooked. The appendix contains an extensive and carefully structured set of numerical experiments specifically designed to study how the performance of our method depends on the accuracy of the survival and censoring models. These results provide strong empirical support for the theoretical double robustness of our method—namely, that valid and informative predictive bounds can be obtained when either distribution is modeled accurately. We believe that a more detailed look at these experiments will confirm that our claims are fully substantiated by the data. The main paper (Figure 1 and the discussion in Section 4) focuses on settings that vary the difficulty of estimating the survival model. These experiments show that our method approaches oracle performance when the survival distribution is easier to learn, while still outperforming existing alternatives in more challenging settings. These results are complemented by a broad set of additional experiments detailed in the appendix and only briefly summarized in Section 4 due to space constraints. For example:
- Figures A2 and A6 vary the number of training samples used to fit the censoring model, showing that coverage improves as the censoring model becomes more accurate, especially in difficult settings where the survival model may be unreliable.
- Figures A5 and A9 vary the number of irrelevant covariates used in the censoring model, demonstrating the impact of overfitting on performance.
- Figures A3 and A7 examine the joint effect of training sample size on both models, further illustrating our method’s double robustness.
These and many other results presented in Appendices A3 and A4 offer what we believe is a thorough empirical validation of our approach. While Section 4.3 of the main paper already points readers to these results, we now realize that some of this material may not have been sufficiently highlighted, and that even attentive readers could miss it. We will take full advantage of the extra page allowed in the camera-ready version to address this. In particular, we plan to improve how these experiments are referenced and summarized in the main text, and will likely move an important figure (such as A2 or A3) into the main body to make the role of the censoring model even more visible. This will ensure that all readers, regardless of how closely they examine the appendix, clearly see the breadth and depth of the empirical results supporting this paper. Thank you again for this helpful feedback!
Training Deep Learning Models with Norm-Constrained LMOs
Accept (spotlight poster)
Summary: This paper develops an algorithmic framework that can exploit an appropriate choice of norm for the entire neural network, with an emphasis on hyperparameter transfer across model sizes. The algorithm, called uSCG (unconstrained Stochastic Conditional Gradient method), shows improvements both theoretically and practically when the norm-ball constraint matches the natural geometry of the problem. ## update after rebuttal My concerns have been addressed. Good paper. Claims And Evidence: The evidence presented supports the claims. Methods And Evaluation Criteria: The methods and evaluation both make sense. Theoretical Claims: The theoretical claims are reasonable, though I have not checked the proofs in detail. I really like the part where different optimizers get organized under the LMO framework and presented in a clean and clear way. Very insightful paper! Experimental Designs Or Analyses: The experiments are strong enough to support the claims for a mainly theoretical paper. Supplementary Material: I have briefly scanned through the material for the theoretical part. No strong inconsistency has been spotted. Relation To Broader Scientific Literature: This work is broadly related to a series of works that aim to beat AdamW with new optimizer proposals, such as Shampoo, SOAP, and Muon, usually in the context of large language model pretraining and finetuning. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: The insight and systematic analysis are quite interesting; unifying everything under the LMO gives me a new perspective on looking at things. Other Comments Or Suggestions: N/A Questions For Authors: 1. Since you build on top of modded-nanoGPT, I have to ask the question: what is your speedrun result against Muon? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are happy that the reviewer found the paper insightful. > "Since you build on top of modded-nanoGPT, I have to ask the question: what is your speedrun result against Muon?" The speedrun configuration is the 124M parameter model with batch size 512 in Fig. 1. In this setting, Scion consistently (across 3 runs) has a slightly lower validation loss than Muon with the same wallclock time, which would translate into a slightly better wallclock time if the number of iterations were reduced to exactly match the validation loss of Muon. The more substantial gain occurs at larger batch sizes. We conduct an additional experiment for the batch size 6144 configuration of Fig. 2, where we simply reduce the number of iterations until Scion matches the larger validation loss of Muon. We find that Scion can achieve the same validation loss as Muon with a 25% smaller wallclock time in this setting.
Summary: The paper introduces the unconstrained Stochastic Conditional Gradient method (uSCG), which builds upon classical Conditional Gradient methods by utilizing linear minimization oracles (LMOs) even for unconstrained optimization problems. This approach leverages the geometry of the problem by assigning different norms to different layers, particularly in language modeling tasks. The authors provide convergence guarantees for uSCG with constant batch sizes and introduce a principled framework for selecting effective constraints based on the input and output space geometries of neural network layers. The paper also discusses the relationship between their method and hyperparameter transfer techniques, such as the Maximal Update Parametrization, which allows for transferring optimal hyperparameters from smaller proxy models to larger ones. The numerical experiments show that the optimal learning rate for the proposed method to minimize validation loss is independent of the model width. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: Not applicable. Essential References Not Discussed: No. Other Strengths And Weaknesses: The theoretical novelty of this paper from Section 5 is sound and concrete. Nevertheless, the algorithmic contribution of this paper is unclear. Both Algorithms 1 and 2 are simply steepest descent methods with momentum. One minor novelty is that the authors use different norms for different layers, as described in Section 3.1. Specifically, the authors use Sign in the output layers. According to the numerical experiments, this change does make the optimal learning rate of the proposed method independent of the model width, but the validation loss improvement is negligible in Figure 1.
The experiments on batch size sensitivity do support that the proposed method has a better validation loss compared to Muon when the batch size is large. Nevertheless, the validation loss curves for all compared algorithms appear to increase with batch size, and at the smallest batch size of 512 the difference between the proposed method and Muon is not significant. Besides, the numerical experiments in this paper are restricted to nanoGPT. It would be better to try different transformer architectures to show the applicability of the proposed method in LLM pre-training. Other Comments Or Suggestions: The algorithmic descriptions in Algorithms 1 and 2 are the same. Is this a typo? Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We believe there are important misunderstandings that we address below. > "The algorithmic descriptions in algorithm 1 and 2 are same. Is this a typo?" This is not a typo: the two algorithms are distinct and solve different problems, as detailed in Section 3. Algorithm 1 solves an unconstrained problem, while Algorithm 2 solves a norm-ball constrained problem. For deep learning this leads to distinct behaviour of the norm of the weights between Algorithms 1 and 2, as illustrated in Fig. 8-9, since only Algorithm 2 ensures that the norm does not grow beyond the constraint radius $\rho$. This explicit norm control can be particularly important for numerical stability and to avoid overfitting, as seen e.g. in the CIFAR10 experiments where Algorithm 2 (Scion) works without the heuristic weight normalization otherwise present in the Muon implementation for this problem. > "Both algorithm 1 and 2 are simply steepest descent methods with momentum" Neither of our two proposed algorithms is steepest descent (please see Section 4, where we explicitly compare with steepest descent). Algorithm 2 is a Conditional Gradient (aka Frank-Wolfe) based algorithm which applies to constrained problems, while with Algorithm 1 we show that the LMO can even be used for unconstrained problems. One important distinction from steepest descent is that the stepsize $\gamma$ does not have to depend on the Lipschitz constant $L$ (see our main results, Lemmas 5.3-5.6). Intuitively, this is due to the scale invariance of the LMO, which makes the algorithm more stable (the algorithm is less sensitive to the curvature). > "The experiments on batch size sensitivity does support that the proposed method has better validation loss compares to Muon when batch size is large. Nevertheless, the validation loss curves for all compared algorithms look to be increasing w.r.t.
batch size" The loss is increasing with increasing batch size in the experiments because we keep the total number of tokens fixed. The validation loss as a function of batch size is expected to be worse for any optimizer in this setting – the comparison criterion is instead the rate at which performance deteriorates. As the reviewer points out, Scion has better validation accuracy than the baselines for larger batch sizes. One way to quantify the validation loss improvement is to ask how much more quickly Scion can achieve the same validation accuracy as one of the baselines. To this end we conduct an additional experiment for the large batch size of 6144, where we simply reduce the number of iterations until Scion matches the larger validation loss of Muon. We find that Scion can achieve the same validation loss as Muon with a 25% smaller wallclock time.
Summary: The authors propose a new stochastic family of algorithms exploiting the so-called Linear Minimization Oracle (LMO) to solve optimization problems. It provides a more general framework and unifies many other algorithms as special cases. Theoretical guarantees as well as significant speed-ups on the training of models such as nanoGPT demonstrate the effectiveness of their proposal. Claims And Evidence: All the claims and experimental evidence are clearly presented and justified. I hope the code is publicly available once the paper is published. Methods And Evaluation Criteria: The proposed framework is pretty interesting and well-presented. The experiments conducted look credible and show a certain improvement over existing methods. Theoretical Claims: I found all theoretical claims sound. I admit that I did not check the proofs very carefully in the Appendix. However, for the parts that I did check, there is not any mathematical error. To be verified: in Lemma D.1, I think the right constant is $L_2$, and not $L$. Experimental Designs Or Analyses: I think the experimental designs are justified and credible. Supplementary Material: I read several proofs in the Supplementary Materials, notably Sections C and D. Relation To Broader Scientific Literature: The paper provides a general framework to unify several methods in the literature. Essential References Not Discussed: One particular line of related research (that is quite old) is natural gradient descent [1]. It also exploits the geometry of the function by defining a pseudo-metric on the parameter space via the KL-divergence. It is not very popular nowadays since: 1) calculating the Fisher matrix and its inverse is quite expensive, and 2) the empirical performance in the stochastic setting is not very impressive. However, I think it is worth mentioning because it is, to the best of my knowledge, among the first papers stepping away from the Euclidean metric and proposing a new metric for optimization. [1]: S. Amari, "Natural Gradient Works Efficiently in Learning," in Neural Computation, vol. 10, no. 2, pp. 251-276, 15 Feb. 1998, doi: 10.1162/089976698300017746. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: I have one question: from what I understand, all the LMOs calculated in Tables 2, 3 and 4 do not take into account the non-linear activation functions. If that is the case, do you think taking them into account could improve the optimization process? After all, the geometry of the data after the activation is what really matters, not the pre-activated data. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback. > "To be verified: In Lemma D.1, I think it is the right constant is $L_2$, and not $L$." The right constant is $L$, since smoothness is taken w.r.t. the arbitrary norm $\Vert\cdot\Vert$. This is what eventually yields the dependency on $L$ seen in the main results of Section 5. What might have caused the confusion is that there is a minor typo in the statement of Lemma D.1, since the guarantee of Lemma D.1 should be stated in terms of $\Vert\nabla f(\bar x^n)\Vert_*$ instead of $\Vert\nabla f(\bar x^n)\Vert_2^2$. This is just a typo though, that does not change the final result, since the following Lemma 5.3 is correctly stated in terms of the dual norm. We will correct the typo in the final version. > "One particular line of related research (that is quite old) is natural gradient descent [1]. It also exploits the geometry of the function by defining a pseudo-metric in the parameters space by KL-divergence. It is not very popular nowadays since: 1) Calculating the Fisher matrix and its inverse is quite expensive, and 2) The empirical performance in stochastic setting is not very impressive. However, I think it is worth mentioning because it is among the first papers stepping away from the Euclidean metric and proposing a new metric for optimization, to the best of my knowledge." This is a good point. We will remember to cite and discuss natural gradient descent. > Geometry of activation functions It might be possible to further improve the performance by taking the geometry of activation functions into account, but it seems challenging. One place to maybe look for ideas is the vast literature on initialization. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their responses. I have no further questions and I keep the score as it is.
Summary: This paper proposes a new approach to optimizing DNNs that leverages the Linear Minimization Oracle (LMO) over norm-constrained sets. The core idea is to adapt the optimizer a priori to the geometry of the neural network, rather than relying solely on on-the-fly adaptive methods like Adam. The authors introduce a new stochastic algorithm, unconstrained Stochastic Conditional Gradient (uSCG), which surprisingly applies the LMO concept to unconstrained problems. They also adapt the existing constrained Stochastic Conditional Gradient (SCG) method for non-convex deep learning. The key to their approach is choosing a norm for the LMO that reflects the structure of the neural network, particularly operator norms between layers. The paper claims that this approach leads to improved performance, better batch size robustness, memory efficiency, and zero-shot hyperparameter transferability across models of different widths. The authors provide theoretical convergence guarantees and empirical results on nanoGPT language modeling and CIFAR-10 image classification tasks. The framework unifies several known optimization algorithms like Normalized SGD, SignSGD, and Muon. Claims And Evidence: The claims are generally supported by evidence. However, while the claim of improved performance and hyperparameter transferability is well supported for language modeling, the evidence is weaker for image classification tasks, lacking comparison to other optimizers in this domain. Methods And Evaluation Criteria: The core methodological idea of using norm-constrained LMOs, and particularly the uSCG algorithm, is novel and makes sense for adapting the optimizer to the network structure. The connection to operator norms is theoretically well-motivated. The nanoGPT experiments, including the use of a 3B parameter model, are appropriate for evaluating performance and hyperparameter transfer in language modeling. 
The CIFAR-10 experiments are insufficient for evaluating the method's general applicability. Comparison to other optimizers should be made on more diverse datasets. Theoretical Claims: I didn't thoroughly check the correctness of the proofs. Experimental Designs Or Analyses: The nanoGPT experiments are well-designed to test the core claims on language modeling. Varying model width, batch size, and comparing to Adam and Muon provides a strong evaluation. The CIFAR-10 experiments show the transferability of the optimal stepsize, but they do not establish the method's competitiveness in image classification. Supplementary Material: The supplementary material is comprehensive and well-organized, providing more details for understanding and reproducing the results. Relation To Broader Scientific Literature: The paper connects to several areas of the literature, including conditional gradient methods and adaptive optimization. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: The paper could benefit from a clearer explanation of the intuition behind why uSCG works for unconstrained problems. Questions For Authors: Comparison to other optimizers should be made on more diverse datasets. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and address the remaining concerns below. > "The CIFAR-10 experiments show the transferability of the optimal stepsize, but they do not establish the method's competitiveness in image classification" To establish competitiveness on image classification we conduct a speedrun to achieve the 94% test accuracy target on the [airbench](https://github.com/KellerJordan/cifar10-airbench) codebase. Scion achieves a 94.08% test accuracy (averaged over 200 runs) in the same wallclock time that the SOTA method Muon uses to achieve the target accuracy. This provides a 13.6% speedup over the tuned baseline of SGD with momentum. Note that Scion achieves this SOTA performance without the ad-hoc weight normalization otherwise used in the airbench implementation of Muon. > "The paper could benefit from a clearer explanation of the intuition behind why uSCG works for unconstrained problems." It was indeed also surprising to us initially that we could show convergence for uSCG. After applying smoothness in the analysis there are mainly two terms to address: i) the _direction_ of the update (given by an inner product), and ii) the _magnitude_ of the update. For simplicity, let us consider the deterministic case where we have $f(x^{k+1}) \leq f(x^k) + \gamma\rho \langle \nabla f(x^k), \operatorname{lmo}(\nabla f(x^k))\rangle + \tfrac{\gamma^2\rho^2 L}{2}$ The main insight is the relationship between the LMO over a norm-ball and the definition of the dual norm. Specifically, the value that the LMO attains on the boundary of the norm-ball is exactly the dual norm of the gradient. So the _direction_ of the update (the inner product) is still informative and it in fact decreases the function value by the dual norm of the gradient. However, the _magnitude_ of the update (the last term) will be large even close to a solution, but this can be taken care of by the stepsize choice (as made precise by Lemmas 5.3 and 5.4).
Notice that we do not need to know the Lipschitz constant $L$ to select the stepsize $\gamma$ as is otherwise the case in e.g. gradient descent. The intuitive reason for this is the scale invariance of the LMO. We will remark on these intuitions in the final version.
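For concreteness, the update the rebuttal describes can be sketched in a few lines. This is our own illustrative sketch, not the authors' code: we instantiate the LMO for the ℓ∞ ball (whose minimizer is minus the sign of the gradient), and the quadratic test function, stepsize schedule, and all names are assumptions of ours.

```python
import numpy as np

def lmo_linf(g, rho=1.0):
    """LMO over the l-infinity ball of radius rho: argmin over ||s||_inf <= rho of <g, s>."""
    return -rho * np.sign(g)

def uscg_step(x, grad, m, gamma, beta=0.9):
    """One unconstrained step: momentum-average the gradient estimate,
    then move a distance gamma along the LMO direction."""
    m = beta * m + (1 - beta) * grad
    return x + gamma * lmo_linf(m), m

# Minimise f(x) = 0.5*||x||^2 (so grad f(x) = x) with a decaying stepsize.
rng = np.random.default_rng(0)
x, m = rng.standard_normal(5), np.zeros(5)
for k in range(200):
    x, m = uscg_step(x, x, m, gamma=0.1 / np.sqrt(k + 1))
```

Note how the update magnitude is fixed by the stepsize rather than by the gradient's scale: near the minimum the iterates keep moving by roughly gamma per step, which is exactly the "magnitude" term the rebuttal says must be controlled by the stepsize choice.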
Understanding and Improving Length Generalization in Recurrent Models
Accept (poster)
Summary: The paper investigates why recurrent models fail to generalize to sequence lengths beyond their training context and proposes methods to improve length generalization. They also propose a metric (Effective Remembrance) which basically captures the difference in a model's next token distribution at a point, when considering two different context lengths (starting at t=0 or at some other index in the past >0). Claims And Evidence: Most claims are well-supported, particularly the core hypothesis that unexplored states lead to generalization failure and that state-based training interventions can fix the issue. Methods And Evaluation Criteria: The paper presents comparisons using only the perplexity metric on The Pile dataset (so the task is next token prediction), and on a very synthetic long context task (passkey retrieval). It would be good to show the results on a long context task derived from real world data (maybe LongBench or even something simpler that requires 2 or 3 hop reasoning). Theoretical Claims: No. Experimental Designs Or Analyses: - Supplementary Material: No. Relation To Broader Scientific Literature: The paper shows that initialising the state of a sequence model with noise that has been fit to the typical distribution of states at the end of a context window (say N) helps to train/finetune models with context length N that generalise to 2N (since the model has been trained while seeing what state distributions look like at the end of 2N steps of context, even if the gradient was not backpropagated through 2N steps). This mimics what was effectively done when other works trained with TBTT and carried the state forward into a new rollout starting from the end point of the previous rollout of a sequence model - but this time with fitted noise or state passing. The fitted noise intervention works well for the small models but not for a bigger model; state passing works well for both.
Essential References Not Discussed: - Other Strengths And Weaknesses: I think the proposed approach of trying to understand train-time randomisations/methods to encourage length generalization makes sense and is useful. Presenting results on a long-context generalization benchmark that is not very toy or synthetic would significantly strengthen the paper. Other Comments Or Suggestions: - Questions For Authors: Am I correct in assuming that state passing is different from TBTT since the state value (say s) used to initialise a new rollout at training iteration i, for batch element say 0, is not related in any way to previous states that might have been seen in the prior context that corresponds to batch element 0? So if we have a context of length 2N, but we are only able to train for context lengths of N, it is not the case that we are using the states at the end of N to initialize the rollout when we go from N+1 to 2N in the next batch (because the states at the end of the batch are shuffled - so while in some cases they might actually correspond to the correct context/trajectory, in many cases they will not?). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are encouraged to see that the reviewer thinks that our unexplored states hypothesis is well supported and that our interventions on the initial state are useful to achieve length generalization. We provide responses to their questions: > RE: Results on long-context generalization benchmarks besides passkey retrieval to strengthen the paper We followed the suggestion of the reviewer and **present new results for two long context benchmarks.** The first one is BABILong [1], a challenging benchmark which tests both the common sense understanding of the models and their ability to capture long range dependencies in text. We present results for the baseline and our interventions in the following link: https://postimg.cc/6TjDhBZ0 **It can be observed that state passing enhances the length extrapolation capabilities of the model, improving performance from ~9% to ~24% on sequences of length $256k$** (we recall that the model is trained and finetuned on sequences of length $2k$). This reinforces our claim that state passing is not only useful to fix the diverging perplexity of established language models, but also helps length generalization by exposing them to more initial states during training, thus enhancing their ability to handle long contexts. The second benchmark is Synthetic Copying [2], which consists of copying an arbitrary sequence of tokens. The results are shown in the following link: https://postimg.cc/ctwryLQV **It can be seen that state passing greatly helps length extrapolation, improving the accuracy on sequences three times longer than those seen during training from 27% to 47%.** We believe these new experiments highlight the length extrapolation capabilities of the interventions on benchmarks that are closer to real world data, as the reviewer suggested.
Finally, regarding the LongBench benchmark specifically, unfortunately we lack the computational resources to train and evaluate models that can achieve decent performance on this benchmark. Even models of more than 7B parameters are barely better than random guessing (see Table 2 in [3]), and we have focused our work on models at the 1B scale and below. However, we find this benchmark very relevant to our methods and will include it as an avenue for future work in the final version of the paper. [1] Kuratov, Y. et al. (2024). _BABILong: Testing the limits of LLMs with long context reasoning-in-a-haystack_. In _The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track_. [2] Jelassi, S. et al. (2024). Repeat After Me: Transformers are Better than State Space Models at Copying. _Proceedings of the 41st International Conference on Machine Learning_. [3] Bai, Y. et al. (2024). LongBench: A bilingual, multitask benchmark for long context understanding. In _Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics_. Association for Computational Linguistics. > RE: The difference between state passing and TBTT resides in whether the initial state is related to the context of the current sequence That is correct: the difference between state passing and TBTT is that in state passing the initial state is unrelated to the given sequence, whereas in TBTT the initial state corresponds to prior context of the sequence. From an implementation point of view, the only difference is that in state passing the dataset is shuffled, whereas in TBTT the dataloader carefully orders the samples so that when passing a final state to the next sequence, that final state corresponds to the prior context of the sequence.
As the reviewer points out, in principle it could be possible that in state passing the final state would correspond to the correct context, but the probability is negligible given that the datasets are so large. Even though in practice the implementation differences are small, in theory the methods are doing very different things. Both methods can be seen as sampling initial states from a given distribution. State passing samples initial states from the distribution of final states (for convenience, we simply take the final state of the previous batch, which is a good enough approximation because the batches are unrelated). In contrast, TBTT samples states from a degenerate distribution, which specifically consists of the final state of the previous context of the sequence. A very interesting result of our paper is that TBTT is sufficient but not necessary for length generalization, given that state passing also works (or even sampling from random or fitted noise can be good enough). Besides being a practical and efficient way of enabling length generalization, we believe our results for these interventions shed light on the core issue with length generalization: models fail to length generalize when they are not trained on the distribution of states attained when processing a sequence (our "unexplored states hypothesis").
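The implementation difference described above can be made concrete with a toy sketch (our own, with an illustrative scalar linear recurrence; none of the names come from the paper). The only difference between state passing and TBTT is whether the carried state matches the sequence's true prior context:

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(state, seq, A=0.9, B=0.1):
    """Toy linear recurrence s_{t+1} = A*s_t + B*x_t; returns the final state."""
    for x in seq:
        state = A * state + B * x
    return state

# A dataset of mutually unrelated sequences.
data = rng.standard_normal((16, 32))

# State passing: sequences are visited in shuffled order, and the final state
# of one rollout initialises the next, so the initial state is (almost surely)
# unrelated to the new sequence's true prior context.
state = 0.0
for i in rng.permutation(len(data)):
    state = rollout(state, data[i])

# TBTT would instead iterate contiguous chunks of one long sequence, so the
# carried state always equals the state after the true preceding context.
```

In both cases the training loop carries a state across rollouts; only the dataloader ordering differs, which is why the two methods look almost identical in code while sampling initial states from very different distributions.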
Summary: The authors study length generalization in recurrent models. They begin with an empirical analysis of length generalization failures for Mamba v1 / v2 and gated linear attention. They define a new metric, "Effective Remembrance", to quantify the influence of the context prefix on a model's predictions, and show that Mamba v2 has high Effective Remembrance for early tokens. They hypothesize that length generalization failures are due to incomplete exploration of the recurrent state distribution during training and propose four state initializations to mitigate this. They provide empirical results showing that post-training using these initialization schemes substantially improves length generalization measured in terms of perplexity. Claims And Evidence: The authors claim that: 1. Models fail to generalize when they reach states that were not explored during training (the "unexplored state hypothesis") 2. Length generalization in recurrent models can easily be improved by post-training with simple interventions on the state initialization. The evidence for 2 is quite good given their experimental results, as long as you're comfortable characterizing length generalization in terms of perplexity. The evidence for 1 does not seem as strong. There is no direct investigation of the unexplored state hypothesis, even though this would seem to be straightforward - e.g. by comparing the distribution of states under short vs long contexts and correlating their discrepancy with some measure of performance. Methods And Evaluation Criteria: Effective remembrance is an intuitive way to measure the influence of a context prefix. The interventions described for the state are clear, though there is not much effort to distinguish why you'd prefer one over the other, apart from the empirical results. In terms of evaluation, length generalization is defined and evaluated almost entirely in terms of perplexity.
While this is standard for the literature, it's worth noting that there is meaningful debate as to whether perplexity is really informative for long-context performance (see below). The authors also study performance on a passkey task, but there are many other instances of long-context tasks, or even whole benchmarks, that could provide an alternative set of evaluation criteria for length generalization. Theoretical Claims: N/A Experimental Designs Or Analyses: The experiments are sound overall. It might have been useful to include a "negative control" for the post-training, perhaps by post-training on the same amount of data with the standard zero-initialization for the state. Supplementary Material: I read the appendix. No code was provided. Relation To Broader Scientific Literature: Length generalization is a long-running topic in sequence modeling, including for recent architectures like Mamba. Recurrent models have long been studied in terms of the statistical properties or dynamics of the state, including aspects that may lead to better performance over long sequences ("long memory") or more stable behavior. Essential References Not Discussed: There is an active debate on whether perplexity is informative for long-context performance. Recent works include [1, 2], both of which observe low correlation between perplexity and performance on benchmarks such as LongBench [3]. It would be useful to cite this discussion and to hear the authors' thoughts given their approach in this paper. [1] Fang, L., et al. (2024). What is Wrong with Perplexity for Long-context Language Modeling?. arXiv preprint arXiv:2410.23771. [2] Hu, et al. (2024). Can Perplexity Reflect Large Language Model's Ability in Long Text Understanding?. arXiv preprint arXiv:2405.06105. [3] Bai, Y., et al. (2023). LongBench: A bilingual, multitask benchmark for long context understanding. arXiv preprint arXiv:2308.14508. Other Strengths And Weaknesses: The paper is well written and easy to follow.
The claims are stated clearly and the methods are well defined. Other Comments Or Suggestions: Typos: - $x_t$, not $x_T$, in line 114? - I'd suggest using "et al." for the 1.5-page Llama-3 citation. Questions For Authors: - Could the authors share their thoughts on using perplexity to evaluate length generalization? - Why not test the unexplored state hypothesis directly by comparing state distributions in short and long context scenarios? - The authors' effective remembrance analysis essentially concludes that the recurrent models (and Mamba v2 in particular) are too strongly influenced by observations in the distant past. This seems at odds to some extent with the main objective of modern recurrent architecture development, which is to produce models that are able to retain information over very long sequences. Moreover, it would seem that the "right" level of effective remembrance depends on the data generating process, so that less is not always better. Is there a balance to be struck here? Code Of Conduct: Affirmed. Overall Recommendation: 3
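For reference, the Effective Remembrance metric discussed in this review compares the model's next-token distribution under the full context with the one under a truncated context. A minimal sketch of one way to compute it (our own illustration; the paper may use a different distance, total variation is our assumption):

```python
import numpy as np

def effective_remembrance(p_full, p_trunc):
    """Total-variation distance between the next-token distribution given the
    full context [0, T] and the one given only the suffix [t, T]. A large
    value means tokens before position t still strongly shape the prediction."""
    return 0.5 * np.abs(p_full - p_trunc).sum()

p_full = np.array([0.7, 0.2, 0.1])   # prediction with the whole prefix
p_trunc = np.array([0.5, 0.3, 0.2])  # prediction after dropping tokens before t
er = effective_remembrance(p_full, p_trunc)  # -> 0.2
```

A value near zero means the model "effectively" ignores the dropped prefix; as the review's last question notes, neither extreme is desirable in general.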
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful analysis of the paper and concrete suggestions, and we are encouraged to see that they found the experiments sound and the paper easy to follow. We provide responses to their discussion: > RE: Perplexity as an evaluation metric We understand the reviewer's concern that perplexity on its own might not be sufficient to assess the capabilities of the models. Thus, we have run two additional evaluations. (1) **BABILong** [1]: BABILong is a benchmark that tests the ability of the model to process a long context and respond to a natural language understanding question based on some facts in the context. The results are shown in the following link: https://postimg.cc/6TjDhBZ0 **It can be seen that TBTT and especially state passing greatly help the model achieve better performance than the baseline, improving from ~9% to ~24% at a context length of $256k$**. (2) **Synthetic copying** [2]: We have also evaluated length generalization on the synthetic copying task, which consists of copying an arbitrary sequence of tokens. The results are shown in the following link: https://postimg.cc/ctwryLQV **It can be observed that state passing greatly helps length extrapolation, improving the accuracy on sequences three times longer than those seen during training from 27% to 47%.** We believe these new experiments highlight the length extrapolation capabilities of the interventions on the initial state and hope they help address the reviewer's concern about perplexity as an evaluation metric. [1] Kuratov, Y. et al. (2024). _BABILong: Testing the limits of LLMs with long context reasoning-in-a-haystack_. In _The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track_. [2] Jelassi, S. et al. (2024). Repeat After Me: Transformers are Better than State Space Models at Copying.
<i>Proceedings of the 41st International Conference on Machine Learning</i> > RE: Supporting the unexplored states hypothesis by comparing state distributions in short and long contexts We thank the reviewer for this suggestion, which we have added to the final version of the paper. **We measured the distribution of the state depending on the sequence position, and found that the standard deviation (across all elements of the state) increases as the sequence position increases. Thus, when processing longer sequences the model encounters states from a distribution that it has not seen during training. The results are in the following link:** https://postimg.cc/HcpL3yWP **Moreover, in the figure we show that our state passing intervention fixes this issue by producing states whose distributions do not vary so much based on the sequence position.** This effect can also be observed in specific heads of individual layers: https://postimg.cc/LJ0vZsZd This shift in distribution is correlated with performance: the models' perplexity diverges after the training context because they encounter states that have not been seen during training (see Figure 1a). **Thus, we believe these new findings reinforce our unexplored states hypothesis.** > RE: Is lower Effective Remembrance always better? Yes, we absolutely agree that the "right" level of Effective Remembrance is not necessarily as low as possible. A certain level of Effective Remembrance is needed for the model to remember tokens from the past (if it were zero at time $t$, it would be "effectively" ignoring the tokens in positions $[0,t]$, which is undesirable when long range dependencies are needed). However, the Effective Remembrance curves of the baseline models shown in Figure 1 are clearly wrong, given that they are too influenced by tokens that are far away in the past. 
The state passing intervention yields more reasonable Effective Remembrance curves, where the model places more focus on the recent context, which is correlated with better performance. More generally, we believe Effective Remembrance can be a useful tool when applied to other settings and we are eager to use it in future work. Several recent works point out that linear recurrent architectures may fail to achieve tasks like associative recall or copying because they cannot remember past tokens or store previous context into the state (compared to transformers, which do not compress the past context). This could be verified by showing that Effective Remembrance is too low in these tasks (which would be undesirable here, as the reviewer suggested). However, in the context of our work (language models failing to length generalize), the opposite is actually the case: Effective Remembrance is too large. We hypothesize this is due to the states being overparametrized and due to the models overfitting to the states that arise when processing short sequences (which motivates our unexplored states hypothesis and the use of non-zero initial states to train on a wider distribution of states and fix the issue).
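The growing state spread underlying the unexplored states hypothesis can be illustrated with a minimal numerical sketch. This is an editorial illustration under stated assumptions, not the paper's code: a scalar linear recurrence with decay close to 1 stands in for a long-memory channel of an SSM. The spread of the state keeps increasing with position, so states at positions beyond the training length come from a distribution the model never saw.

```python
import numpy as np

rng = np.random.default_rng(0)

def state_trajectory(a, x):
    """Run the scalar linear recurrence h_t = a * h_{t-1} + x_t from h_0 = 0
    and return the full state trajectory."""
    h = np.zeros(len(x))
    prev = 0.0
    for t, xt in enumerate(x):
        prev = a * prev + xt
        h[t] = prev
    return h

# Many independent sequences; decay close to 1, mimicking a long-memory channel.
a, T, n_seqs = 0.999, 4096, 256
states = np.stack([state_trajectory(a, rng.standard_normal(T)) for _ in range(n_seqs)])

# Standard deviation of the state across sequences, as a function of position.
std_by_pos = states.std(axis=0)
print(std_by_pos[63], std_by_pos[4095])  # the spread keeps growing with position
```

For this recurrence the state variance at position $t$ is $(1 - a^{2t})/(1 - a^2)$, which keeps growing toward its stationary value far beyond typical training lengths when $a$ is near 1; a model trained only on short sequences therefore overfits to the narrow early-position state distribution.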
Summary: The paper explores the reasons for limited length generalization of recurrent (mainly modern linear ones like Mamba, GLA) neural networks. The paper explores the hypothesis that this is due to "unexplored states" - i.e. the kind of states that occur after long contexts tend to be unfamiliar to models trained on shorter contexts (as such states are not attained during training). The authors perform several empirical investigations surrounding this. The paper shows that the "effective remembrance" (a proposed measure that quantifies the influence of earlier tokens) is high for RNNs - and models struggling towards length generalization are highly affected by distant tokens. It shows training on short contexts can impede generalization further. All these suggest that the models struggle due to a distribution shift in the states they encounter for longer sequences. To counteract it the paper explores various post-training interventions: 1. Random Gaussian noise to initialize states to increase the diversity of states a model encounters. 2. Fitted noise - from the mean and variance of states occurring in longer contexts. 3. State passing - passing the final state of a different (randomly chosen) sequence as the initial state for another sequence. 4. Truncated Backpropagation Through Time. The paper shows that random Gaussian and Fitted noise do enhance generalization to an extent for certain models, but remain limited because they don't necessarily simulate realistic long-sequence states. However, state passing and truncated backpropagation do enable effective generalization. Later the paper also demonstrates maintenance of long-term dependence with Fitted noise for the passkey retrieval task. ## Update after rebuttal The authors address most of my concerns. I increased the score accordingly. Claims And Evidence: Mostly the claims are supported. Few points I am a bit critical of: 1. 
The passkey retrieval task seems to be shown only for Fitted noise - which did not appear as an effective solution on the Pile. So I would be curious: 1. How do the other methods, particularly BPTT and state passing, do on the task? If we are passing the state of a different passkey-retrieval context - wouldn't that confuse the model? 1. Does passkey retrieval also involve length generalization tests? If not, it's also desirable to see if the models can generalize in situations where actual long range dependency exists and is enforced (unlike general language modeling). The paper also uses the general term "recurrent" to scope out the kind of models it explores, instead of a more specific term like "linear recurrent". It doesn't seem to consider non-linear recurrent models (which have modern variants as well, such as xLSTM [1]). Could be good to limit the scope more explicitly by specifying this if that's the intention. [1] xLSTM: Extended Long Short-Term Memory - Beck et al. NeurIPS 2024 Methods And Evaluation Criteria: Yes. Some potential misses are discussed above. Theoretical Claims: As far as I have checked, no fundamental issues. Experimental Designs Or Analyses: No strong issues stand out to me besides the one I already discussed. One issue I may mention (which is broader than this paper - so I will not particularly penalize this paper for it) is as follows: the context of length generalization matters. Length generalization in language modeling alone may not give the full picture, because in the dominant distribution of natural language contexts it could very well be that long-range dependencies and state tracking are not required. Even some popular solutions like AliBi and such seem to work by having more of a "forgetting" bias towards the past. It's unclear how well length generalization can be achieved in other contexts where the final result is very sensitive to simple changes in the past. 
Passkey retrieval can be one example, but there are other contexts as well one could explore, like logical inference and ListOps length generalization - examples of such exploration include [1,2,3,4] among others. Earlier SSMs like S4D were shown to generalize poorly there [4]. [1] Modeling Hierarchical Structures with Continuous Recursive Neural Networks - Ray Chowdhury et al. ICML 2021 [2] Ordered Memory - Shen et al. NeurIPS 2019 [3] The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization - Csordas et al. ICLR 2022 [4] Recursion in Recursion: Two-Level Nested Recursion for Length Generalization with Scalability - Ray Chowdhury et al. NeurIPS 2023 Supplementary Material: I glanced over the Appendix. Particularly tried to check if anything related to missing evidence is addressed there. Relation To Broader Scientific Literature: The paper is connected to the lineage of linear recurrent models like RWKV, GLA, Mamba. Prior papers on linear recurrent models have already attempted different investigations regarding length generalization. This paper continues that direction but focuses more on the unexplored states hypothesis and non-architectural interventions - like BPTT and passing states from other models. Key contributions seem to be the theoretical formulation of a hypothesis for why RNNs struggle to generalize and some initial empirical investigations, both to support the hypothesis and to explore some initial countermeasures to enable length generalization. Essential References Not Discussed: Not as familiar with this specific direction of length generalization in linear RNNs. However, this paper: The Illusion of State in State-Space Models - Merrill et al. ICML 2024 seems relevant to a degree. This may suggest some root issues in SSMs - in state tracking; and that could also limit length generalization in certain contexts. Simply hacking the initial state may not be the resolution in this case. 
The xLSTM paper suggests better state-tracking - and should count as a model to further investigate if the title and abstract are kept general - i.e. making claims about "recurrent" neural networks and not "linear recurrent" neural networks more specifically. Other Strengths And Weaknesses: **Strengths** Overall, the paper is well-written. Despite any limitations it is a decent exploration of an idea. Exploration of the different post-training intervention techniques is interesting, and they can work as "quick fixes" for better generalization in at least some contexts in the interim. **Weakness** 1. Besides the things mentioned above, to an extent the hypothesis also feels a bit "obvious". I am not sure how much scientific weight to give to it. Since recurrent models have the same parameters at every time step, in a sense it should obviously generalize if the states are similar. And the corollary of that would be that if generalization is failing - the issue would be overfitting to the distribution of seen states. 1. It's not clear if we are getting an effective single intervention strategy that works both in general language modeling and in length generalization in contexts with long-range dependencies - passkey, ListOps etc. (part of which - especially for tasks requiring state-tracking - might simply be impossible for linear RNNs, given Merrill et al.'s work cited above) Other Comments Or Suggestions: n/a Questions For Authors: 1. Does passkey retrieval involve length generalization tests? 1. Are the other strategies BPTT/State-passing tested on passkey retrieval? 1. How about the zero-shot performance of the post-training intervention models on passkey? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed response and we appreciate that they find most of our claims well supported, including our analysis on how state passing and TBTT enable length generalization because they simulate realistic states. We provide answers to their questions and observations: > RE: Does passkey involve length generalization? Yes, passkey retrieval involves length generalization, as the models have only been finetuned on samples of length $T=2048$, yet solve the task at lengths up to $256k$. > RE: Effectiveness of state passing when finetuning in passkey, and zero shot performance in passkey of models post-trained with interventions Originally, we decided to run the experiment with the fitted noise intervention, as in our opinion it is the most suited to this setting. As the reviewer mentions, it is somewhat unnatural to do state passing in this task, as it would be equivalent to providing a context containing a previous passkey. But we recognize that it is natural to wonder about the performance of state passing and TBTT, so here we also provide additional results for these two settings, where **overall state passing and TBTT bring some benefits to length generalization:** State passing finetuning in passkey: https://postimg.cc/vxwhrDXB Zero shot of TBTT post-trained checkpoints: https://postimg.cc/rKkcmJG6 > RE: Do the interventions achieve length generalization for other long context tasks? We agree with the reviewer that perplexity and passkey retrieval might not be enough to assess the long context capabilities of the models, **so we have run experiments on two more long context tasks that are related to core aspects of language modeling**. Due to limited space, we refer to the response to Reviewer b5dM for more details on the tasks. 
Results for BABILong: https://postimg.cc/6TjDhBZ0 Results for the synthetic copying task: https://postimg.cc/ctwryLQV **It can be seen that state passing and TBTT bring significant benefits in length extrapolation for these tasks**, for example by improving length extrapolation on sequences up to length 256k in BABILong. We also thank the reviewer for providing more related references on length generalization, which we will include in the paper as related work and avenues for future work. > RE: Scope of the term "recurrent models" **We have added new results for another architecture, RWKV-v6, and we have observed the same phenomenon where the models trained on short context with zero-initialized states diverge, whereas post-training with state passing enables length generalization** https://postimg.cc/zLkhvF2x We hypothesize that other modern recurrent models will exhibit a similar behavior, but we agree with the reviewer that all the studied models fall under the category of linear recurrent architectures that accept a formulation in terms of SSMs. Thus, we will change the wording in the final version to more clearly set the scope of the claims to the linear recurrent models for which we show experiments, and cite xLSTM as a different recurrent model which might behave differently. > RE: the "unexplored state hypothesis" being obvious Indeed we agree with the reviewer that the hypothesis feels very intuitive in hindsight, but we believe our work is very relevant to the community for two reasons. Firstly, many recent works deal with length generalization by intervening on the architecture mechanism (see specific examples in Section 6), or by proposing new architectures and comparing their performance beyond the training context (for example see [1] or [2]). 
In contrast, our work: **(1) proposes the "unexplored states hypothesis" as a framework to understand the length generalization of recurrent models by reasoning about the distribution of states, and not the architecture mechanism**; **(2) systematically analyzes four training interventions which are motivated by the "unexplored states hypothesis"**; and **(3) shows that the interventions enable length generalization without changes to the architecture.** Secondly, even though it is natural to think that models fail to length generalize because they are out of distribution somehow, **we believe that it is a valuable contribution to propose training with several types of non-zero initialized states to fix this issue**. More specifically, we show that length generalization is enabled when the model is trained with initial states that resemble the states that are attained when processing long sequences. **We believe this finding is both intuitive and interesting, which we consider a strength of our work, and we hope it prompts a shift in the community towards simple (yet underutilized) training interventions with non-zero initialized states**. [1] Yang, S. et al. (2025). _Gated Delta Networks: Improving Mamba2 with Delta Rule_. In _The Thirteenth International Conference on Learning Representations_. [2] Sun, Y. et al. (2024). _Learning to (Learn at Test Time): RNNs with Expressive Hidden States_. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. It addresses some of my concerns and the additional results should strengthen the paper. I increased the score to 4.
Summary: The authors in this paper propose a framework to analyze the problem of length generalization in recurrent networks. The authors primarily focus on State Space Models and how they behave when test sequences are significantly longer than training sequences by studying their response to 4 training interventions: (1) initializing the SSM with a random initial state, (2) learning the distribution from which the initial state from (1) is sampled, (3) initializing the SSM with the final state of a different sequence and (4) using truncated backpropagation through time. The authors claim that the proposed simple interventions lead to significant improvement in length generalization when compared to vanilla Mamba SSMs trained without these interventions. Claims And Evidence: - I believe that the claims about state space models benefitting from the abovementioned training interventions are validated with sound experimentation. - The authors however claim that they have developed a theoretical and empirical framework to understand length generalization broadly in recurrent models. This claim is not validated, as the class of recurrent architectures evaluated in this work are (mostly just) SSMs and, in addition, a specific type of hardware-efficient transformer (GLA). Methods And Evaluation Criteria: - I believe the proposed methods and evaluation criteria are necessary and make sense for the problem of length generalization. However, the tasks used in this paper seem quite simplistic in nature and don't stress test the training interventions on more challenging length generalization problems. - Using perplexity to measure length generalization, I believe, only measures the stability of the recurrent state space at test sequence lengths beyond what was experimented with during training. However, it doesn't study whether the extrapolated state space retains the same high expressivity (seen during training) in this unseen domain of longer recurrent sequences. 
- I appreciate the authors evaluating their interventions on the passkey retrieval task, however, this task seems quite simplistic and would request the authors to consider tasks such as Maze Solving task in [1, 2] and the incremental grouping task in [3]. Evaluating on these more challenging tasks makes the work more impactful and practical. References: 1. Bansal, A., Schwarzschild, A., Borgnia, E., Emam, Z., Huang, F., Goldblum, M., & Goldstein, T. (2022). End-to-end algorithm synthesis with recurrent networks: Extrapolation without overthinking. Advances in Neural Information Processing Systems, 35, 20232-20242. 2. Veerabadran, V., Ravishankar, S., Tang, Y., Raina, R., & de Sa, V. (2023). Adaptive recurrent vision performs zero-shot computation scaling to unseen difficulty levels. Advances in Neural Information Processing Systems, 36, 18132-18145. 3. Goetschalckx, L., Govindarajan, L. N., Karkada Ashok, A., Ahuja, A., Sheinberg, D., & Serre, T. (2023). Computing a human-like reaction time metric from stable recurrent vision models. Advances in neural information processing systems, 36, 14338-14365. Theoretical Claims: NA. Experimental Designs Or Analyses: The authors have evaluated the effect of different training context lengths, number of trainable parameters and post-training interventions on a series of SSM architectures (Mamba-1, Mamba-2) and a hardware-efficient transformer architecture (GLA) in the context of length generalization. This experimental design does not necessarily reflect any lack of rigor, but like I mentioned, I find the task of passkey-retrieval to be quite simplistic and the choice of architectures to be incoherent with the overall phrasing of the contributions applying generally to all types of RNNs. Supplementary Material: Yes. All parts of the supplementary information were reviewed. Relation To Broader Scientific Literature: The work is very relevant to current art in language modeling (LM) with recurrent architectures. 
Developing LMs with stable hidden state spaces is crucial to efficiently scaling performance at long sequence lengths during inference time. Hence, I find this work to be well situated to broader scientific literature on sequence modeling. Essential References Not Discussed: I believe the two main interventions studied here (state passing and T-BPTT) are studied in other RNN models in prior art. So I don't agree that these interventions are novel contributions in this work. I believe that the authors have applied these interventions found in prior RNN length generalization literature on SSMs and GLA, which I believe is the contribution of the current work. Prior findings list: 1. State passing: This is essentially what is studied under the name of Incremental Progress Training in the following paper on recurrent architectures. Bansal, A., Schwarzschild, A., Borgnia, E., Emam, Z., Huang, F., Goldblum, M., & Goldstein, T. (2022). End-to-end algorithm synthesis with recurrent networks: Extrapolation without overthinking. Advances in Neural Information Processing Systems, 35, 20232-20242. 2. T-BPTT: Truncated back propagation through time is a well-known prior approach to learning stable recurrent models without gradient collapse as the authors have included in their references. This also is not a novel contribution of this paper. 3. There is a line of recent works in length generalization in recurrent models that is not mentioned in the related work. I request the authors to please look at this list of prior work in length generalization in RNNs, and add comment on how their proposed work is related to these highly related prior art. i) Bansal, A., Schwarzschild, A., Borgnia, E., Emam, Z., Huang, F., Goldblum, M., & Goldstein, T. (2022). End-to-end algorithm synthesis with recurrent networks: Extrapolation without overthinking. Advances in Neural Information Processing Systems, 35, 20232-20242. ii) Veerabadran, V., Ravishankar, S., Tang, Y., Raina, R., & de Sa, V. 
(2023). Adaptive recurrent vision performs zero-shot computation scaling to unseen difficulty levels. Advances in Neural Information Processing Systems, 36, 18132-18145. iii) Goetschalckx, L., Govindarajan, L. N., Karkada Ashok, A., Ahuja, A., Sheinberg, D., & Serre, T. (2023). Computing a human-like reaction time metric from stable recurrent vision models. Advances in neural information processing systems, 36, 14338-14365. Other Strengths And Weaknesses: NA. Please refer to my above review. Other Comments Or Suggestions: NA. Questions For Authors: I have already conveyed my questions and concerns in other parts of the review. I thank the authors for their submission and look forward to reading their rebuttal. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback and helpful discussion around length generalization in recurrent models more broadly. We are encouraged to see that the reviewer believes this type of work is relevant and that our training interventions on the initial states of SSMs are sound. We provide answers to their questions: > RE: Experiments limited to SSMs and Gated Linear Attention We understand the reviewer's concern about our results being specific to SSMs and Gated Linear Attention (GLA). **For that reason, we have performed a new experiment on another recurrent architecture, RWKV-v6 [1], and have found a similar phenomenon**: the model trained on short contexts and zero-initialized states diverges, but post-training with state passing enables length generalization: https://postimg.cc/zLkhvF2x We will include this result in the final version of the paper to increase the scope of studied recurrent models. We also tried using DeltaNet and Gated DeltaNet [2], but the open source implementations seem to have a bug when using a non-zero initial state (the output of the forward pass over a long context does not match that of the forward pass over a suffix of the context with the prefix's final state substituted as the initial state), so we could not test our interventions. We hypothesize that the benefits of training with non-zero initialized states will also apply to other modern recurrent networks, but we agree with the reviewer's remark and in the final version of the paper **we will make our claims more specific to the scope of architectures investigated.** [1] Peng, B. et al. (2024). _Eagle and Finch: RWKV with matrix-valued states and dynamic recurrence_. In _First Conference on Language Modeling_. [2] Yang, S. et al. (2025). _Gated Delta Networks: Improving Mamba2 with Delta Rule_. In _The Thirteenth International Conference on Learning Representations_. 
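The consistency check the authors describe (forward over the full context versus forward over a suffix starting from the prefix's final state) can be sketched for a toy scalar linear recurrence. This is an editorial illustration, not the DeltaNet code: in a correct implementation of initial-state support the two computations must agree.

```python
import numpy as np

def run(a, x, h0=0.0):
    """Scalar linear recurrence h_t = a * h_{t-1} + x_t; returns the final state."""
    h = h0
    for xt in x:
        h = a * h + xt
    return h

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
a, split = 0.9, 60

full = run(a, x)                              # forward over the whole context
prefix_state = run(a, x[:split])              # final state after the prefix
resumed = run(a, x[split:], h0=prefix_state)  # resume the suffix from that state

# A correct implementation of initial-state support must satisfy this.
assert np.isclose(full, resumed)
```

Real architectures also carry short convolutions, normalization statistics, and per-layer caches alongside the recurrent state, which is presumably where such mismatches hide.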
> RE: Evaluation on other long context tasks besides passkey retrieval and perplexity We recognize that passkey retrieval on its own might not be enough to assess the length generalization capabilities of our interventions. Therefore, **we have evaluated the models on two additional long context tasks related to language modeling, which go beyond passkey retrieval in their complexity**. The initial state interventions bring significant benefits in these tasks. Due to limited space, we refer to the response to Reviewer b5dM for more details on the tasks. Results for the BABILong task: https://postimg.cc/6TjDhBZ0 Results for the synthetic copying task: https://postimg.cc/ctwryLQV > RE: References related to length extrapolation in recurrent networks and TBTT / state passing We thank the reviewer for providing these algorithmic extrapolation references. The references propose architecture modifications to enable length extrapolation in recurrent models, and also propose training them on a more diverse distribution of states (Incremental Progress Training). While related, our work focuses on language modeling, where established models exhibit a diverging perplexity beyond the training context length - which is surprising, as this does not require length extrapolation, given that predicting a language token at, say, position 2500 should be roughly as hard as predicting it at position 2000. 
Several recent works in language modeling attempt to solve this length generalization issue by changing the inner mechanism of the models (see Section 6 for specific references), so **one of our key contributions resides in showing that length generalization in established language models is achievable through simple (yet underutilized) interventions on the initial state.** We also agree that some of these interventions were used in prior works, but we believe our main contribution lies in establishing a framework for understanding why they help in recurrent models and providing empirical experiments to support it. In particular, our work: (1) introduces the **"unexplored states hypothesis", which explains the poor generalization performance of recurrent models by reasoning about the distribution of the states attained when processing tokens beyond the training context**; **(2) systematically evaluates a range of interventions on the initial state, including random and fitted noise, which are naturally motivated by the "unexplored states hypothesis"**; and (3) **provides a deeper understanding of the state and how recurrent models process long context through metrics like Effective Remembrance and the analysis of the results of the interventions**. Additionally, **we show that these interventions enable length extrapolation in tasks related to core aspects of language modeling, like passkey retrieval, synthetic copying and BABILong.** We agree that the algorithmic length extrapolation tasks of the provided references are important for improving machine learning models at large and some of the methods used are similar, so in the final version we will include them both as related work and as avenues for future work.
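The state passing intervention discussed throughout these rebuttals can be sketched in a few lines. This is a hedged editorial illustration using a toy scalar recurrence: the buffer and uniform sampling scheme here are expository assumptions, not the paper's exact training code.

```python
import numpy as np

rng = np.random.default_rng(0)

def final_state(a, x, h0):
    """Toy scalar recurrence standing in for a recurrent language model's state."""
    h = h0
    for xt in x:
        h = a * h + xt
    return h

a, T = 0.95, 128
state_buffer = [0.0]  # the usual zero initial state remains one valid choice

# State passing: each training sequence starts from the final state of a
# previously processed, unrelated sequence, drawn at random from the buffer,
# so the model sees a much wider distribution of initial states.
for step in range(50):
    x = rng.standard_normal(T)
    h0 = state_buffer[rng.integers(len(state_buffer))]
    h_final = final_state(a, x, h0)
    state_buffer.append(h_final)  # expose this state to later sequences
    # ... the gradient update on this sequence's loss would go here ...
```

The point of the scheme is that the stored final states resemble the states reached deep into a long sequence, which is exactly the distribution a model trained only on short, zero-initialized sequences never explores.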